00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2239
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3498
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.118 The recommended git tool is: git
00:00:00.118 using credential 00000000-0000-0000-0000-000000000002
00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.184 Fetching changes from the remote Git repository
00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.260 Using shallow fetch with depth 1
00:00:00.260 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.260 > git --version # timeout=10
00:00:00.315 > git --version # 'git version 2.39.2'
00:00:00.315 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.344 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.344 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.136 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.149 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.164 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:06.164 > git config core.sparsecheckout # timeout=10
00:00:06.176 > git read-tree -mu HEAD # timeout=10
00:00:06.192 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:06.211 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:06.211 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:06.319 [Pipeline] Start of Pipeline
00:00:06.330 [Pipeline] library
00:00:06.332 Loading library shm_lib@master
00:00:06.332 Library shm_lib@master is cached. Copying from home.
00:00:06.346 [Pipeline] node
00:00:21.348 Still waiting to schedule task
00:00:21.348 Waiting for next available executor on ‘vagrant-vm-host’
00:03:19.121 Running on VM-host-WFP1 in /var/jenkins/workspace/ubuntu22-vg-autotest
00:03:19.123 [Pipeline] {
00:03:19.134 [Pipeline] catchError
00:03:19.136 [Pipeline] {
00:03:19.150 [Pipeline] wrap
00:03:19.160 [Pipeline] {
00:03:19.169 [Pipeline] stage
00:03:19.172 [Pipeline] { (Prologue)
00:03:19.192 [Pipeline] echo
00:03:19.194 Node: VM-host-WFP1
00:03:19.201 [Pipeline] cleanWs
00:03:19.211 [WS-CLEANUP] Deleting project workspace...
00:03:19.211 [WS-CLEANUP] Deferred wipeout is used...
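The git sequence earlier in this log pins the job scripts (jbp) to a single commit: a depth-1 fetch of refs/heads/master followed by a forced checkout of the fetched revision. A minimal standalone bash sketch of that sequence (an illustration only, assuming anonymous access; the real job goes through the GIT_ASKPASS credential and the proxy-dmz HTTP proxy shown above):

    # Sketch: reproduce the shallow, pinned checkout from the log.
    REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    REV=53a1a621557260e3fbfd1fd32ee65ff11a804d5b   # revision checked out above

    git init jbp && cd jbp
    git fetch --tags --force --depth=1 "$REPO" refs/heads/master
    git checkout -f "$REV"   # succeeds because the depth-1 fetch brought in exactly this commit

Pinning to a commit rather than a branch tip keeps every node in the build pool reading identical job scripts even if master moves mid-run.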
00:03:19.217 [WS-CLEANUP] done
00:03:19.405 [Pipeline] setCustomBuildProperty
00:03:19.504 [Pipeline] httpRequest
00:03:19.909 [Pipeline] echo
00:03:19.911 Sorcerer 10.211.164.101 is alive
00:03:19.922 [Pipeline] retry
00:03:19.924 [Pipeline] {
00:03:19.939 [Pipeline] httpRequest
00:03:19.943 HttpMethod: GET
00:03:19.944 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:03:19.944 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:03:19.945 Response Code: HTTP/1.1 200 OK
00:03:19.946 Success: Status code 200 is in the accepted range: 200,404
00:03:19.947 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:03:20.094 [Pipeline] }
00:03:20.111 [Pipeline] // retry
00:03:20.120 [Pipeline] sh
00:03:20.405 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:03:20.420 [Pipeline] httpRequest
00:03:20.817 [Pipeline] echo
00:03:20.818 Sorcerer 10.211.164.101 is alive
00:03:20.830 [Pipeline] retry
00:03:20.832 [Pipeline] {
00:03:20.846 [Pipeline] httpRequest
00:03:20.850 HttpMethod: GET
00:03:20.851 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:03:20.851 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:03:20.852 Response Code: HTTP/1.1 200 OK
00:03:20.853 Success: Status code 200 is in the accepted range: 200,404
00:03:20.853 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:03:23.056 [Pipeline] }
00:03:23.070 [Pipeline] // retry
00:03:23.076 [Pipeline] sh
00:03:23.354 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:03:25.939 [Pipeline] sh
00:03:26.221 + git -C spdk log --oneline -n5
00:03:26.221 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:03:26.221 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:03:26.221 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:03:26.221 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:03:26.221 9469ea403 nvme/fio_plugin: add trim support
00:03:26.241 [Pipeline] writeFile
00:03:26.257 [Pipeline] sh
00:03:26.541 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:26.553 [Pipeline] sh
00:03:26.834 + cat autorun-spdk.conf
00:03:26.834 SPDK_TEST_UNITTEST=1
00:03:26.834 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:26.834 SPDK_TEST_NVME=1
00:03:26.834 SPDK_TEST_BLOCKDEV=1
00:03:26.834 SPDK_RUN_ASAN=1
00:03:26.834 SPDK_RUN_UBSAN=1
00:03:26.834 SPDK_TEST_RAID5=1
00:03:26.834 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:26.840 RUN_NIGHTLY=1
00:03:26.841 [Pipeline] }
00:03:26.853 [Pipeline] // stage
00:03:26.863 [Pipeline] stage
00:03:26.865 [Pipeline] { (Run VM)
00:03:26.873 [Pipeline] sh
00:03:27.150 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:27.150 + echo 'Start stage prepare_nvme.sh'
00:03:27.150 Start stage prepare_nvme.sh
00:03:27.150 + [[ -n 2 ]]
00:03:27.150 + disk_prefix=ex2
00:03:27.150 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]]
00:03:27.150 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]]
00:03:27.150 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf
00:03:27.150 ++ SPDK_TEST_UNITTEST=1
00:03:27.150 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:27.150 ++ SPDK_TEST_NVME=1
00:03:27.150 ++ SPDK_TEST_BLOCKDEV=1
00:03:27.150 ++ SPDK_RUN_ASAN=1
00:03:27.150 ++ SPDK_RUN_UBSAN=1
00:03:27.150 ++ SPDK_TEST_RAID5=1
00:03:27.150 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:27.150 ++ RUN_NIGHTLY=1
00:03:27.150 + cd /var/jenkins/workspace/ubuntu22-vg-autotest
00:03:27.150 + nvme_files=()
00:03:27.150 + declare -A nvme_files
00:03:27.150 + backend_dir=/var/lib/libvirt/images/backends
00:03:27.150 + nvme_files['nvme.img']=5G
00:03:27.150 + nvme_files['nvme-cmb.img']=5G
00:03:27.150 + nvme_files['nvme-multi0.img']=4G
00:03:27.150 + nvme_files['nvme-multi1.img']=4G
00:03:27.150 + nvme_files['nvme-multi2.img']=4G
00:03:27.150 + nvme_files['nvme-openstack.img']=8G
00:03:27.150 + nvme_files['nvme-zns.img']=5G
00:03:27.150 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:27.150 + (( SPDK_TEST_FTL == 1 ))
00:03:27.151 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:27.151 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:27.151 + for nvme in "${!nvme_files[@]}"
00:03:27.151 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:03:27.151 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:27.151 + for nvme in "${!nvme_files[@]}"
00:03:27.151 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:03:27.151 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:27.151 + for nvme in "${!nvme_files[@]}"
00:03:27.151 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:03:27.151 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:27.151 + for nvme in "${!nvme_files[@]}"
00:03:27.151 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:03:27.151 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:27.151 + for nvme in "${!nvme_files[@]}"
00:03:27.151 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:03:27.151 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:27.409 + for nvme in "${!nvme_files[@]}"
00:03:27.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:03:27.409 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:27.409 + for nvme in "${!nvme_files[@]}"
00:03:27.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:03:27.409 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:27.409 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:03:27.409 + echo 'End stage prepare_nvme.sh'
00:03:27.409 End stage prepare_nvme.sh
00:03:27.420 [Pipeline] sh
00:03:27.700 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:27.701 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f ubuntu2204
00:03:27.701
00:03:27.701 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant
00:03:27.701 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk
00:03:27.701 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest
00:03:27.701 HELP=0
00:03:27.701 DRY_RUN=0
00:03:27.701 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,
00:03:27.701 NVME_DISKS_TYPE=nvme,
00:03:27.701 NVME_AUTO_CREATE=0
00:03:27.701 NVME_DISKS_NAMESPACES=,
00:03:27.701 NVME_CMB=,
00:03:27.701 NVME_PMR=,
00:03:27.701 NVME_ZNS=,
00:03:27.701 NVME_MS=,
00:03:27.701 NVME_FDP=,
00:03:27.701 SPDK_VAGRANT_DISTRO=ubuntu2204
00:03:27.701 SPDK_VAGRANT_VMCPU=10
00:03:27.701 SPDK_VAGRANT_VMRAM=12288
00:03:27.701 SPDK_VAGRANT_PROVIDER=libvirt
00:03:27.701 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:27.701 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:27.701 SPDK_OPENSTACK_NETWORK=0
00:03:27.701 VAGRANT_PACKAGE_BOX=0
00:03:27.701 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:27.701 FORCE_DISTRO=true
00:03:27.701 VAGRANT_BOX_VERSION=
00:03:27.701 EXTRA_VAGRANTFILES=
00:03:27.701 NIC_MODEL=e1000
00:03:27.701
00:03:27.701 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt'
00:03:27.701 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest
00:03:30.983 Bringing machine 'default' up with 'libvirt' provider...
00:03:32.360 ==> default: Creating image (snapshot of base box volume).
00:03:32.619 ==> default: Creating domain with the following settings...
00:03:32.619 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1727785393_d593ec4f4d715bc45f4d
00:03:32.619 ==> default: -- Domain type: kvm
00:03:32.619 ==> default: -- Cpus: 10
00:03:32.619 ==> default: -- Feature: acpi
00:03:32.619 ==> default: -- Feature: apic
00:03:32.619 ==> default: -- Feature: pae
00:03:32.619 ==> default: -- Memory: 12288M
00:03:32.619 ==> default: -- Memory Backing: hugepages:
00:03:32.619 ==> default: -- Management MAC:
00:03:32.619 ==> default: -- Loader:
00:03:32.619 ==> default: -- Nvram:
00:03:32.619 ==> default: -- Base box: spdk/ubuntu2204
00:03:32.619 ==> default: -- Storage pool: default
00:03:32.619 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1727785393_d593ec4f4d715bc45f4d.img (20G)
00:03:32.619 ==> default: -- Volume Cache: default
00:03:32.619 ==> default: -- Kernel:
00:03:32.619 ==> default: -- Initrd:
00:03:32.619 ==> default: -- Graphics Type: vnc
00:03:32.619 ==> default: -- Graphics Port: -1
00:03:32.619 ==> default: -- Graphics IP: 127.0.0.1
00:03:32.619 ==> default: -- Graphics Password: Not defined
00:03:32.619 ==> default: -- Video Type: cirrus
00:03:32.619 ==> default: -- Video VRAM: 9216
00:03:32.619 ==> default: -- Sound Type:
00:03:32.619 ==> default: -- Keymap: en-us
00:03:32.619 ==> default: -- TPM Path:
00:03:32.619 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:32.619 ==> default: -- Command line args:
00:03:32.619 ==> default: -> value=-device,
00:03:32.619 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:03:32.619 ==> default: -> value=-drive,
00:03:32.619 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:03:32.619 ==> default: -> value=-device,
00:03:32.619 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:32.878 ==> default: Creating shared folders metadata...
00:03:32.878 ==> default: Starting domain.
00:03:34.787 ==> default: Waiting for domain to get an IP address...
00:03:44.763 ==> default: Waiting for SSH to become available...
00:03:47.306 ==> default: Configuring and enabling network interfaces...
00:03:52.574 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:57.837 ==> default: Mounting SSHFS shared folder...
00:03:58.406 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:03:58.406 ==> default: Checking Mount..
00:03:59.410 ==> default: Folder Successfully Mounted!
00:03:59.410 ==> default: Running provisioner: file...
00:03:59.669 default: ~/.gitconfig => .gitconfig
00:03:59.927
00:03:59.927 SUCCESS!
00:03:59.927
00:03:59.927 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:03:59.927 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:59.927 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm.
00:03:59.927
00:03:59.936 [Pipeline] }
00:03:59.948 [Pipeline] // stage
00:03:59.973 [Pipeline] dir
00:03:59.973 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt
00:03:59.975 [Pipeline] {
00:03:59.985 [Pipeline] catchError
00:03:59.986 [Pipeline] {
00:03:59.997 [Pipeline] sh
00:04:00.278 + vagrant ssh-config --host vagrant
00:04:00.278 + sed -ne /^Host/,$p
00:04:00.278 + tee ssh_conf
00:04:03.568 Host vagrant
00:04:03.568 HostName 192.168.121.40
00:04:03.568 User vagrant
00:04:03.568 Port 22
00:04:03.568 UserKnownHostsFile /dev/null
00:04:03.568 StrictHostKeyChecking no
00:04:03.568 PasswordAuthentication no
00:04:03.568 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:04:03.568 IdentitiesOnly yes
00:04:03.568 LogLevel FATAL
00:04:03.568 ForwardAgent yes
00:04:03.568 ForwardX11 yes
00:04:03.568
00:04:03.583 [Pipeline] withEnv
00:04:03.586 [Pipeline] {
00:04:03.604 [Pipeline] sh
00:04:03.894 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:03.894 source /etc/os-release
00:04:03.894 [[ -e /image.version ]] && img=$(< /image.version)
00:04:03.894 # Minimal, systemd-like check.
00:04:03.894 if [[ -e /.dockerenv ]]; then
00:04:03.894 # Clear garbage from the node's name:
00:04:03.894 # agt-er_autotest_547-896 -> autotest_547-896
00:04:03.894 # $HOSTNAME is the actual container id
00:04:03.894 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:03.894 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:03.894 # We can assume this is a mount from a host where container is running,
00:04:03.894 # so fetch its hostname to easily identify the target swarm worker.
00:04:03.894 container="$(< /etc/hostname) ($agent)"
00:04:03.894 else
00:04:03.894 # Fallback
00:04:03.894 container=$agent
00:04:03.894 fi
00:04:03.894 fi
00:04:03.894 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:03.894
00:04:04.165 [Pipeline] }
00:04:04.184 [Pipeline] // withEnv
00:04:04.193 [Pipeline] setCustomBuildProperty
00:04:04.211 [Pipeline] stage
00:04:04.214 [Pipeline] { (Tests)
00:04:04.235 [Pipeline] sh
00:04:04.516 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:04.786 [Pipeline] sh
00:04:05.066 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:05.344 [Pipeline] timeout
00:04:05.345 Timeout set to expire in 1 hr 30 min
00:04:05.348 [Pipeline] {
00:04:05.361 [Pipeline] sh
00:04:05.643 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:06.210 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:04:06.223 [Pipeline] sh
00:04:06.506 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:06.774 [Pipeline] sh
00:04:07.050 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:07.323 [Pipeline] sh
00:04:07.604 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo
00:04:07.896 ++ readlink -f spdk_repo
00:04:07.896 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:07.896 + [[ -n /home/vagrant/spdk_repo ]]
00:04:07.896 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:07.896 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:07.896 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:07.896 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:07.896 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:07.896 + [[ ubuntu22-vg-autotest == pkgdep-* ]]
00:04:07.896 + cd /home/vagrant/spdk_repo
00:04:07.896 + source /etc/os-release
00:04:07.896 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:04:07.896 ++ NAME=Ubuntu
00:04:07.896 ++ VERSION_ID=22.04
00:04:07.896 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:04:07.896 ++ VERSION_CODENAME=jammy
00:04:07.896 ++ ID=ubuntu
00:04:07.896 ++ ID_LIKE=debian
00:04:07.896 ++ HOME_URL=https://www.ubuntu.com/
00:04:07.896 ++ SUPPORT_URL=https://help.ubuntu.com/
00:04:07.896 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:04:07.896 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:04:07.896 ++ UBUNTU_CODENAME=jammy
00:04:07.896 + uname -a
00:04:07.896 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:04:07.896 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:08.154 Hugepages
00:04:08.154 node hugesize free / total
00:04:08.154 node0 1048576kB 0 / 0
00:04:08.154 node0 2048kB 0 / 0
00:04:08.154
00:04:08.154 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:08.154 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:08.154 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:08.154 + rm -f /tmp/spdk-ld-path
00:04:08.154 + source autorun-spdk.conf
00:04:08.154 ++ SPDK_TEST_UNITTEST=1
00:04:08.154 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:08.154 ++ SPDK_TEST_NVME=1
00:04:08.154 ++ SPDK_TEST_BLOCKDEV=1
00:04:08.154 ++ SPDK_RUN_ASAN=1
00:04:08.154 ++ SPDK_RUN_UBSAN=1
00:04:08.154 ++ SPDK_TEST_RAID5=1
00:04:08.154 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:08.154 ++ RUN_NIGHTLY=1
00:04:08.154 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:08.154 + [[ -n '' ]]
00:04:08.154 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:08.154 + for M in /var/spdk/build-*-manifest.txt
00:04:08.154 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:08.154 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:08.154 + for M in /var/spdk/build-*-manifest.txt
00:04:08.154 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:08.154 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:08.154 ++ uname
00:04:08.154 + [[ Linux == \L\i\n\u\x ]]
00:04:08.154 + sudo dmesg -T
00:04:08.154 + sudo dmesg --clear
00:04:08.412 + dmesg_pid=2108
00:04:08.412 + [[ Ubuntu == FreeBSD ]]
00:04:08.412 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:08.412 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:08.412 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:08.412 + sudo dmesg -Tw
00:04:08.412 + [[ -x /usr/src/fio-static/fio ]]
00:04:08.412 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:08.412 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:08.412 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:08.412 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:04:08.412 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:04:08.412 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:04:08.412 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:08.412 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:08.412 Test configuration:
00:04:08.412 SPDK_TEST_UNITTEST=1
00:04:08.412 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:08.412 SPDK_TEST_NVME=1
00:04:08.412 SPDK_TEST_BLOCKDEV=1
00:04:08.412 SPDK_RUN_ASAN=1
00:04:08.412 SPDK_RUN_UBSAN=1
00:04:08.412 SPDK_TEST_RAID5=1
00:04:08.412 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:08.413 RUN_NIGHTLY=1
12:23:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
12:23:50 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
12:23:50 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:23:50 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:23:50 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
12:23:50 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
12:23:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
12:23:50 -- paths/export.sh@5 -- $ export PATH
12:23:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
12:23:50 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
12:23:50 -- common/autobuild_common.sh@440 -- $ date +%s
12:23:50 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727785430.XXXXXX
12:23:50 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727785430.rBipYN
12:23:50 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
12:23:50 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
12:23:50 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
12:23:50 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
12:23:50 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
12:23:50 -- common/autobuild_common.sh@456 -- $ get_config_params
12:23:50 -- common/autotest_common.sh@387 -- $ xtrace_disable
12:23:50 -- common/autotest_common.sh@10 -- $ set +x
12:23:50 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
12:23:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:23:50 -- spdk/autobuild.sh@12 -- $ umask 022
12:23:50 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
12:23:50 -- spdk/autobuild.sh@16 -- $ date -u
00:04:08.413 Tue Oct 1 12:23:50 UTC 2024
12:23:50 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:08.413 LTS-66-g726a04d70
12:23:50 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
12:23:50 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
12:23:50 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
12:23:50 -- common/autotest_common.sh@1083 -- $ xtrace_disable
12:23:50 -- common/autotest_common.sh@10 -- $ set +x
00:04:08.413 ************************************
00:04:08.413 START TEST asan
00:04:08.413 ************************************
00:04:08.413 using asan
12:23:50 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:04:08.413
00:04:08.413 real 0m0.000s
00:04:08.413 user 0m0.000s
00:04:08.413 sys 0m0.000s
12:23:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:04:08.413 ************************************
00:04:08.413 END TEST asan
00:04:08.413 ************************************
12:23:50 -- common/autotest_common.sh@10 -- $ set +x
12:23:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
12:23:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
12:23:50 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
12:23:50 -- common/autotest_common.sh@1083 -- $ xtrace_disable
12:23:50 -- common/autotest_common.sh@10 -- $ set +x
00:04:08.413 ************************************
00:04:08.413 START TEST ubsan
00:04:08.413 ************************************
00:04:08.413 using ubsan
12:23:50 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:04:08.413
00:04:08.413 real 0m0.000s
00:04:08.413 user 0m0.000s
00:04:08.413 sys 0m0.000s
12:23:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:04:08.413 ************************************
00:04:08.413 END TEST ubsan
00:04:08.413 ************************************
12:23:50 -- common/autotest_common.sh@10 -- $ set +x
00:04:08.671 12:23:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:08.671 12:23:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:08.671 12:23:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:08.671 12:23:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:08.671 12:23:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:08.671 12:23:50 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:04:08.671 12:23:50 -- spdk/autobuild.sh@58 -- $ unittest_build
00:04:08.671 12:23:50 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build
12:23:50 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
12:23:50 -- common/autotest_common.sh@1083 -- $ xtrace_disable
12:23:50 -- common/autotest_common.sh@10 -- $ set +x
00:04:08.672 ************************************
00:04:08.672 START TEST unittest_build
00:04:08.672 ************************************
00:04:08.672 12:23:50 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:04:08.672 12:23:50 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:04:08.672 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:08.672 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:09.239 Using 'verbs' RDMA provider
00:04:27.913 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:04:42.799 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:04:42.799 Creating mk/config.mk...done.
00:04:42.799 Creating mk/cc.flags.mk...done.
00:04:42.799 Type 'make' to build.
00:04:42.799 12:24:24 -- common/autobuild_common.sh@408 -- $ make -j10
00:04:42.799 make[1]: Nothing to be done for 'all'.
00:04:57.674 The Meson build system
00:04:57.674 Version: 1.4.0
00:04:57.674 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:04:57.674 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:04:57.674 Build type: native build
00:04:57.674 Program cat found: YES (/usr/bin/cat)
00:04:57.674 Project name: DPDK
00:04:57.674 Project version: 23.11.0
00:04:57.674 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:04:57.674 C linker for the host machine: cc ld.bfd 2.38
00:04:57.674 Host machine cpu family: x86_64
00:04:57.674 Host machine cpu: x86_64
00:04:57.674 Message: ## Building in Developer Mode ##
00:04:57.674 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:57.674 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:04:57.674 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:57.674 Program python3 found: YES (/usr/bin/python3)
00:04:57.674 Program cat found: YES (/usr/bin/cat)
00:04:57.674 Compiler for C supports arguments -march=native: YES
00:04:57.674 Checking for size of "void *" : 8
00:04:57.674 Checking for size of "void *" : 8 (cached)
00:04:57.674 Library m found: YES
00:04:57.674 Library numa found: YES
00:04:57.674 Has header "numaif.h" : YES
00:04:57.674 Library fdt found: NO
00:04:57.674 Library execinfo found: NO
00:04:57.674 Has header "execinfo.h" : YES
00:04:57.674 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:04:57.674 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:57.674 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:57.674 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:57.674 Run-time dependency openssl found: YES 3.0.2
00:04:57.674 Run-time dependency libpcap found: NO (tried pkgconfig)
00:04:57.674 Library pcap found: NO
00:04:57.674 Compiler for C supports arguments -Wcast-qual: YES
00:04:57.674 Compiler for C supports arguments -Wdeprecated: YES
00:04:57.674 Compiler for C supports arguments -Wformat: YES
00:04:57.674 Compiler for C supports arguments -Wformat-nonliteral: YES
00:04:57.674 Compiler for C supports arguments -Wformat-security: YES
00:04:57.674 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:57.674 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:57.674 Compiler for C supports arguments -Wnested-externs: YES
00:04:57.674 Compiler for C supports arguments -Wold-style-definition: YES
00:04:57.674 Compiler for C supports arguments -Wpointer-arith: YES
00:04:57.674 Compiler for C supports arguments -Wsign-compare: YES
00:04:57.674 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:57.675 Compiler for C supports arguments -Wundef: YES
00:04:57.675 Compiler for C supports arguments -Wwrite-strings: YES
00:04:57.675 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:57.675 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:57.675 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:57.675 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:57.675 Program objdump found: YES (/usr/bin/objdump)
00:04:57.675 Compiler for C supports arguments -mavx512f: YES
00:04:57.675 Checking if "AVX512 checking" compiles: YES
00:04:57.675 Fetching value of define "__SSE4_2__" : 1
00:04:57.675 Fetching value of define "__AES__" : 1
00:04:57.675 Fetching value of define "__AVX__" : 1
00:04:57.675 Fetching value of define "__AVX2__" : 1
00:04:57.675 Fetching value of define "__AVX512BW__" : 1
00:04:57.675 Fetching value of define "__AVX512CD__" : 1
00:04:57.675 Fetching value of define "__AVX512DQ__" : 1
00:04:57.675 Fetching value of define "__AVX512F__" : 1
00:04:57.675 Fetching value of define "__AVX512VL__" : 1
00:04:57.675 Fetching value of define "__PCLMUL__" : 1
00:04:57.675 Fetching value of define "__RDRND__" : 1
00:04:57.675 Fetching value of define "__RDSEED__" : 1
00:04:57.675 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:57.675 Fetching value of define "__znver1__" : (undefined)
00:04:57.675 Fetching value of define "__znver2__" : (undefined)
00:04:57.675 Fetching value of define "__znver3__" : (undefined)
00:04:57.675 Fetching value of define "__znver4__" : (undefined)
00:04:57.675 Library asan found: YES
00:04:57.675 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:57.675 Message: lib/log: Defining dependency "log"
00:04:57.675 Message: lib/kvargs: Defining dependency "kvargs"
00:04:57.675 Message: lib/telemetry: Defining dependency "telemetry"
00:04:57.675 Library rt found: YES
00:04:57.675 Checking for function "getentropy" : NO
00:04:57.675 Message: lib/eal: Defining dependency "eal"
00:04:57.675 Message: lib/ring: Defining dependency "ring"
00:04:57.675 Message: lib/rcu: Defining dependency "rcu"
00:04:57.675 Message: lib/mempool: Defining dependency "mempool"
00:04:57.675 Message: lib/mbuf: Defining dependency "mbuf"
00:04:57.675 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:57.675 Fetching value of define "__AVX512F__" : 1 (cached)
00:04:57.675 Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:57.675 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:57.675 Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:57.675 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:04:57.675 Compiler for C supports arguments -mpclmul: YES
00:04:57.675 Compiler for C supports arguments -maes: YES
00:04:57.675 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:57.675 Compiler for C supports arguments -mavx512bw: YES
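Each 'Compiler for C supports arguments ...' line in the run above and below is meson probing the toolchain: it compiles a trivial translation unit with the candidate flag and records YES or NO. A rough shell equivalent of a single probe (file names are illustrative, not what meson actually writes to disk):

    # Sketch of a meson-style flag probe for -mavx512bw.
    printf 'int main(void) { return 0; }\n' > probe.c
    if cc -mavx512bw -Werror -c probe.c -o probe.o 2>/dev/null; then
        echo 'Compiler for C supports arguments -mavx512bw: YES'
    else
        echo 'Compiler for C supports arguments -mavx512bw: NO'
    fi

The 'Fetching value of define' lines are the companion check: instead of testing whether a flag compiles, meson reads a preprocessor macro's value (1 or undefined) to learn what the -march=native baseline already enables.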
00:04:57.675 Compiler for C supports arguments -mavx512dq: YES
00:04:57.675 Compiler for C supports arguments -mavx512vl: YES
00:04:57.675 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:57.675 Compiler for C supports arguments -mavx2: YES
00:04:57.675 Compiler for C supports arguments -mavx: YES
00:04:57.675 Message: lib/net: Defining dependency "net"
00:04:57.675 Message: lib/meter: Defining dependency "meter"
00:04:57.675 Message: lib/ethdev: Defining dependency "ethdev"
00:04:57.675 Message: lib/pci: Defining dependency "pci"
00:04:57.675 Message: lib/cmdline: Defining dependency "cmdline"
00:04:57.675 Message: lib/hash: Defining dependency "hash"
00:04:57.675 Message: lib/timer: Defining dependency "timer"
00:04:57.675 Message: lib/compressdev: Defining dependency "compressdev"
00:04:57.675 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:57.675 Message: lib/dmadev: Defining dependency "dmadev"
00:04:57.675 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:57.675 Message: lib/power: Defining dependency "power"
00:04:57.675 Message: lib/reorder: Defining dependency "reorder"
00:04:57.675 Message: lib/security: Defining dependency "security"
00:04:57.675 Has header "linux/userfaultfd.h" : YES
00:04:57.675 Has header "linux/vduse.h" : YES
00:04:57.675 Message: lib/vhost: Defining dependency "vhost"
00:04:57.675 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:57.675 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:57.675 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:57.675 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:57.675 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:57.675 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:57.675 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:57.675 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:57.675 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:57.675 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:57.675 Program doxygen found: YES (/usr/bin/doxygen)
00:04:57.675 Configuring doxy-api-html.conf using configuration
00:04:57.675 Configuring doxy-api-man.conf using configuration
00:04:57.675 Program mandb found: YES (/usr/bin/mandb)
00:04:57.675 Program sphinx-build found: NO
00:04:57.675 Configuring rte_build_config.h using configuration
00:04:57.675 Message:
00:04:57.675 =================
00:04:57.675 Applications Enabled
00:04:57.675 =================
00:04:57.675
00:04:57.675 apps:
00:04:57.675
00:04:57.675
00:04:57.675 Message:
00:04:57.675 =================
00:04:57.675 Libraries Enabled
00:04:57.675 =================
00:04:57.675
00:04:57.675 libs:
00:04:57.675 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:57.675 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:57.675 cryptodev, dmadev, power, reorder, security, vhost,
00:04:57.675
00:04:57.675 Message:
00:04:57.675 ===============
00:04:57.675 Drivers Enabled
00:04:57.675 ===============
00:04:57.675
00:04:57.675 common:
00:04:57.675
00:04:57.675 bus:
00:04:57.675 pci, vdev,
00:04:57.675 mempool:
00:04:57.675 ring,
00:04:57.675 dma:
00:04:57.675
00:04:57.675 net:
00:04:57.675
00:04:57.675 crypto:
00:04:57.675
00:04:57.675 compress:
00:04:57.675
00:04:57.675 vdpa:
00:04:57.675
00:04:57.675
00:04:57.675 Message:
00:04:57.675 =================
00:04:57.675 Content Skipped
00:04:57.675 =================
00:04:57.675
00:04:57.675 apps:
00:04:57.675 dumpcap: explicitly disabled via build config
00:04:57.675 graph: explicitly disabled via build config
00:04:57.675 pdump: explicitly disabled via build config
00:04:57.675 proc-info: explicitly disabled via build config
00:04:57.675 test-acl: explicitly disabled via build config
00:04:57.675 test-bbdev: explicitly disabled via build config
00:04:57.675 test-cmdline: explicitly disabled via build config
00:04:57.675 test-compress-perf: explicitly disabled via build config
00:04:57.675 test-crypto-perf: explicitly disabled via build config
00:04:57.675 test-dma-perf: explicitly disabled via build config
00:04:57.675 test-eventdev: explicitly disabled via build config
00:04:57.675 test-fib: explicitly disabled via build config
00:04:57.675 test-flow-perf: explicitly disabled via build config
00:04:57.675 test-gpudev: explicitly disabled via build config
00:04:57.675 test-mldev: explicitly disabled via build config
00:04:57.675 test-pipeline: explicitly disabled via build config
00:04:57.675 test-pmd: explicitly disabled via build config
00:04:57.675 test-regex: explicitly disabled via build config
00:04:57.675 test-sad: explicitly disabled via build config
00:04:57.675 test-security-perf: explicitly disabled via build config
00:04:57.675
00:04:57.675 libs:
00:04:57.675 metrics: explicitly disabled via build config
00:04:57.675 acl: explicitly disabled via build config
00:04:57.675 bbdev: explicitly disabled via build config
00:04:57.675 bitratestats: explicitly disabled via build config
00:04:57.675 bpf: explicitly disabled via build config
00:04:57.675 cfgfile: explicitly disabled via build config
00:04:57.675 distributor: explicitly disabled via build config
00:04:57.675 efd: explicitly disabled via build config
00:04:57.675 eventdev: explicitly disabled via build config
00:04:57.675 dispatcher: explicitly disabled via build config
00:04:57.675 gpudev: explicitly disabled via build config
00:04:57.675 gro: explicitly disabled via build config
00:04:57.675 gso: explicitly disabled via build config
00:04:57.675 ip_frag: explicitly disabled via build config
00:04:57.675 jobstats: explicitly disabled via build config
00:04:57.675 latencystats: explicitly disabled via build config
00:04:57.675 lpm: explicitly disabled via build config
00:04:57.675 member: explicitly disabled via build config
00:04:57.675 pcapng: explicitly disabled via build config
00:04:57.675 rawdev: explicitly disabled via build config
00:04:57.675 regexdev: explicitly disabled via build config
00:04:57.675 mldev: explicitly disabled via build config
00:04:57.675 rib: explicitly disabled via build config
00:04:57.675 sched: explicitly disabled via build config
00:04:57.675 stack: explicitly disabled via build config
00:04:57.675 ipsec: explicitly disabled via build config
00:04:57.675 pdcp: explicitly disabled via build config
00:04:57.675 fib: explicitly disabled via build config
00:04:57.675 port: explicitly disabled via build config
00:04:57.675 pdump: explicitly disabled via build config
00:04:57.675 table: explicitly disabled via build config
00:04:57.675 pipeline: explicitly disabled via build config
00:04:57.675 graph: explicitly disabled via build config
00:04:57.675 node: explicitly disabled via build config
00:04:57.675
00:04:57.676 drivers:
00:04:57.676 common/cpt: not in enabled drivers build config
00:04:57.676 common/dpaax: not in enabled drivers build config
00:04:57.676 common/iavf: not in enabled drivers build config
00:04:57.676 common/idpf: not in enabled drivers build config
00:04:57.676 common/mvep: not in enabled drivers build config
00:04:57.676 common/octeontx: not in enabled drivers build config
00:04:57.676 bus/auxiliary: not in enabled drivers build config
00:04:57.676 bus/cdx: not in enabled drivers build config
00:04:57.676 bus/dpaa: not in enabled drivers build config
00:04:57.676 bus/fslmc: not in enabled drivers build config
00:04:57.676 bus/ifpga: not in enabled drivers build config
00:04:57.676 bus/platform: not in enabled drivers build config
00:04:57.676 bus/vmbus: not in enabled drivers build config
00:04:57.676 common/cnxk: not in enabled drivers build config
00:04:57.676 common/mlx5: not in enabled drivers build config
00:04:57.676 common/nfp: not in enabled drivers build config
00:04:57.676 common/qat: not in enabled drivers build config
00:04:57.676 common/sfc_efx: not in enabled drivers build config
00:04:57.676 mempool/bucket: not in enabled drivers build config
00:04:57.676 mempool/cnxk: not in enabled drivers build config
00:04:57.676 mempool/dpaa: not in enabled drivers build config
00:04:57.676 mempool/dpaa2: not in enabled drivers build config
00:04:57.676 mempool/octeontx: not in enabled drivers build config
00:04:57.676 mempool/stack: not in enabled drivers build config
00:04:57.676 dma/cnxk: not in enabled drivers build config
00:04:57.676 dma/dpaa: not in enabled drivers build config
00:04:57.676 dma/dpaa2: not in enabled drivers build config
00:04:57.676 dma/hisilicon: not in enabled drivers build config
00:04:57.676 dma/idxd: not in enabled drivers build config
00:04:57.676 dma/ioat: not in enabled drivers build config
00:04:57.676 dma/skeleton: not in enabled drivers build config
00:04:57.676 net/af_packet: not in enabled drivers build config
00:04:57.676 net/af_xdp: not in enabled drivers build config
00:04:57.676 net/ark: not in enabled drivers build config
00:04:57.676 net/atlantic: not in enabled drivers build config
00:04:57.676 net/avp: not in enabled drivers build config
00:04:57.676 net/axgbe: not in enabled drivers build config
00:04:57.676 net/bnx2x: not in enabled drivers build config
00:04:57.676 net/bnxt: not in enabled drivers build config
00:04:57.676 net/bonding: not in enabled drivers build config
00:04:57.676 net/cnxk: not in enabled drivers build config
00:04:57.676 net/cpfl: not in enabled drivers build config
00:04:57.676 net/cxgbe: not in enabled drivers build config
00:04:57.676 net/dpaa: not in enabled drivers build config
00:04:57.676 net/dpaa2: not in enabled drivers build config
00:04:57.676 net/e1000: not in enabled drivers build config
00:04:57.676 net/ena: not in enabled drivers build config
00:04:57.676 net/enetc: not in enabled drivers build config
00:04:57.676 net/enetfec: not in enabled drivers build config
00:04:57.676 net/enic: not in enabled drivers build config
00:04:57.676 net/failsafe: not in enabled drivers build config
00:04:57.676 net/fm10k: not in enabled drivers build config
00:04:57.676 net/gve: not in enabled drivers build config
00:04:57.676 net/hinic: not in enabled drivers build config
00:04:57.676 net/hns3: not in enabled drivers build config
00:04:57.676 net/i40e: not in enabled drivers build config
00:04:57.676 net/iavf: not in enabled drivers build config
00:04:57.676 net/ice: not in enabled drivers build config
00:04:57.676 net/idpf: not in enabled drivers build config
00:04:57.676 net/igc: not in enabled drivers build config
00:04:57.676 net/ionic: not in enabled drivers build config
00:04:57.676 net/ipn3ke: not in enabled drivers build config
00:04:57.676 net/ixgbe: not in enabled drivers build config
00:04:57.676 net/mana: not in enabled drivers build config
00:04:57.676 net/memif: not in enabled drivers build config
00:04:57.676 net/mlx4: not in enabled drivers build config
00:04:57.676 net/mlx5: not in enabled drivers build config
00:04:57.676 net/mvneta: not in enabled drivers build config
00:04:57.676 net/mvpp2: not in enabled drivers build config
00:04:57.676 net/netvsc: not in enabled drivers build config
00:04:57.676 net/nfb: not in enabled drivers build config
00:04:57.676 net/nfp: not in enabled drivers build config
00:04:57.676 net/ngbe: not in enabled drivers build config
00:04:57.676 net/null: not in enabled drivers build config
00:04:57.676 net/octeontx: not in enabled drivers build config
00:04:57.676 net/octeon_ep: not in enabled drivers build config
00:04:57.676 net/pcap: not in enabled drivers build config
00:04:57.676 net/pfe: not in enabled drivers build config
00:04:57.676 net/qede: not in enabled drivers build config
00:04:57.676 net/ring: not in enabled drivers build config
00:04:57.676 net/sfc: not in enabled drivers build config
00:04:57.676 net/softnic: not in enabled drivers build config
00:04:57.676 net/tap: not in enabled drivers build config
00:04:57.676 net/thunderx: not in enabled drivers build config
00:04:57.676 net/txgbe: not in enabled drivers build config
00:04:57.676 net/vdev_netvsc: not in enabled drivers build config
00:04:57.676 net/vhost: not in enabled drivers build config
00:04:57.676 net/virtio: not in enabled drivers build config
00:04:57.676 net/vmxnet3: not in enabled drivers build config
00:04:57.676 raw/*: missing internal dependency, "rawdev"
00:04:57.676 crypto/armv8: not in enabled drivers build config
00:04:57.676 crypto/bcmfs: not in enabled drivers build config
00:04:57.676 crypto/caam_jr: not in enabled drivers build config
00:04:57.676 crypto/ccp: not in enabled drivers build config
00:04:57.676 crypto/cnxk: not in enabled drivers build config
00:04:57.676 crypto/dpaa_sec: not in enabled drivers build config
00:04:57.676 crypto/dpaa2_sec: not in enabled drivers build config
00:04:57.676 crypto/ipsec_mb: not in enabled drivers build config
00:04:57.676 crypto/mlx5: not in enabled drivers build config
00:04:57.676 crypto/mvsam: not in enabled drivers build config
00:04:57.676 crypto/nitrox: not in enabled drivers build config
00:04:57.676 crypto/null: not in enabled drivers build config
00:04:57.676 crypto/octeontx: not in enabled drivers build config
00:04:57.676 crypto/openssl: not in enabled drivers build config
00:04:57.676 crypto/scheduler: not in enabled drivers build config
00:04:57.676 crypto/uadk: not in enabled drivers build config
00:04:57.676 crypto/virtio: not in enabled drivers build config
00:04:57.676 compress/isal: not in enabled drivers build config
00:04:57.676 compress/mlx5: not in enabled drivers build config
00:04:57.676 compress/octeontx: not in enabled drivers build config
00:04:57.676 compress/zlib: not in enabled drivers build config
00:04:57.676 regex/*: missing internal dependency, "regexdev"
00:04:57.676 ml/*: missing internal dependency, "mldev"
00:04:57.676 vdpa/ifc: not in enabled drivers build config
00:04:57.676 vdpa/mlx5: not in enabled drivers build config
00:04:57.676 vdpa/nfp: not in enabled drivers build config
00:04:57.676 vdpa/sfc: not in enabled drivers build config
00:04:57.676 event/*: missing internal dependency, "eventdev"
00:04:57.676 baseband/*: missing internal dependency, "bbdev"
00:04:57.676 gpu/*: missing internal dependency, "gpudev"
00:04:57.676
00:04:57.676
00:04:57.676 Build targets in project: 85
00:04:57.676
00:04:57.676 DPDK 23.11.0
00:04:57.676
00:04:57.676 User defined options
00:04:57.676 buildtype : debug
00:04:57.676 default_library : static
00:04:57.676 libdir : lib
00:04:57.676 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:57.676 b_sanitize : address
00:04:57.676 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon
00:04:57.676 c_link_args :
00:04:57.676 cpu_instruction_set: native
00:04:57.676 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf
00:04:57.676 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev
00:04:57.676 enable_docs : false
00:04:57.676 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:04:57.676 enable_kmods : false
00:04:57.676 tests : false
00:04:57.676
00:04:57.676 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:57.676 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:04:57.676 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[3/265] Compiling C object lib/librte_log.a.p/log_log.c.o
[4/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[6/265] Linking static target lib/librte_log.a
[7/265] Linking static target lib/librte_kvargs.a
[8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[17/265] Linking static target lib/librte_telemetry.a
[18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
[19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:58.196 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:58.196 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:58.196 [28/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:58.196 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:58.516 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:58.516 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:04:58.516 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:58.516 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:58.516 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:58.516 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:58.516 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:58.516 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:58.516 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:04:58.516 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:58.776 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:58.776 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:04:58.776 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:59.036 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:59.036 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:59.036 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:59.036 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:59.036 [47/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:04:59.036 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:59.295 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:59.295 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:59.295 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:04:59.295 [52/265] Linking target lib/librte_log.so.24.0
00:04:59.295 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:59.295 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:04:59.295 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:59.295 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:59.295 [57/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:04:59.295 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:59.295 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:59.552 [60/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:04:59.552 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:59.552 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:59.552 [63/265] Linking target lib/librte_telemetry.so.24.0
00:04:59.552 [64/265] Linking target lib/librte_kvargs.so.24.0
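The ninja steps running here were configured by the 'User defined options' block printed above. A hedged reconstruction of the equivalent bare meson invocation (the real command is generated by SPDK's dpdkbuild wrapper; the long disable_apps/disable_libs lists are elided for brevity):

    # Sketch: configure and build DPDK with the options the log reports.
    meson setup build-tmp \
        --buildtype=debug \
        --default-library=static \
        --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dtests=false
    ninja -C build-tmp

-Db_sanitize=address matches SPDK_RUN_ASAN=1 from autorun-spdk.conf, which is why the configure stage reported 'Library asan found: YES'.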
00:04:59.810 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:59.810 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:59.810 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:59.810 [68/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:04:59.810 [69/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:04:59.810 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:59.810 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:59.810 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:59.810 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:00.068 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:00.068 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:00.068 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:00.068 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:00.068 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:00.068 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:00.068 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:00.327 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:00.327 [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:00.327 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:00.327 [84/265] Linking static target lib/librte_ring.a
00:05:00.327 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:00.327 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:00.587 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:00.587 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:00.587 [89/265] Linking static target lib/librte_eal.a
00:05:00.587 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:00.587 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:00.845 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:00.845 [93/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:00.845 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:00.845 [95/265] Linking static target lib/librte_rcu.a
00:05:00.845 [96/265] Linking static target lib/librte_mempool.a
00:05:00.845 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:01.104 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:01.104 [99/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:01.104 [100/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:01.104 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:01.104 [102/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:01.362 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:01.362 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:01.362 [105/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:01.622 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:01.622 [107/265] Linking static target lib/librte_meter.a
00:05:01.622 [108/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:01.622 [109/265] Linking static target lib/librte_net.a
00:05:01.622 [110/265] Linking static target lib/librte_mbuf.a
00:05:01.622 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:01.622 [112/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:01.622 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:01.882 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:02.142 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:02.142 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.142 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.402 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:02.402 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:02.402 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:02.662 [121/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.922 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:02.922 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:02.922 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:02.922 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:02.922 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:02.922 [127/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:02.922 [128/265] Linking static target lib/librte_pci.a
00:05:02.922 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:02.922 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:02.922 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:03.182 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:03.182 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:03.182 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:03.182 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:03.182 [136/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.182 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:03.441 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:03.441 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:03.441 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:03.441 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:03.441 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:03.441 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:03.441 [144/265] Linking static target lib/librte_cmdline.a
00:05:03.441 [145/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.441 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:05:03.699 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:03.699 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:03.699 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:03.699 [150/265] Linking static target lib/librte_timer.a
00:05:03.699 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:03.699 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:03.958 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:05:03.958 [154/265] Linking static target lib/librte_ethdev.a
00:05:03.958 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:03.958 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:03.958 [157/265] Linking static target lib/librte_compressdev.a
00:05:03.958 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:04.216 [159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:04.216 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:04.216 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:04.216 [162/265] Linking static target lib/librte_dmadev.a
00:05:04.216 [163/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:05:04.216 [164/265] Linking static target lib/librte_hash.a
00:05:04.475 [165/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.475 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:04.475 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:04.475 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:04.475 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:04.734 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.734 [171/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.734 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:04.734 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.993 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:04.993 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:04.993 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:04.993 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:04.993 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:04.993 [179/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.993 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:05:04.993 [181/265] Linking static target lib/librte_cryptodev.a
00:05:04.993 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:04.993 [183/265] Linking static target lib/librte_power.a
00:05:05.253 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:05:05.253 [185/265] Linking static target lib/librte_reorder.a
00:05:05.253 [186/265] Compiling C
object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:05.253 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:05.253 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:05.512 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:05.512 [190/265] Linking static target lib/librte_security.a 00:05:05.771 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.771 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:05.771 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.031 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.031 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:06.031 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:06.031 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:06.031 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:06.289 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:06.289 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:06.289 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:06.289 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:06.289 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:06.548 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:06.548 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:06.548 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:06.548 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:06.548 [208/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:06.548 [209/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.548 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:06.548 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:06.549 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:06.549 [213/265] Linking static target drivers/librte_bus_vdev.a 00:05:06.805 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:06.805 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:06.805 [216/265] Linking static target drivers/librte_bus_pci.a 00:05:06.805 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:06.805 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:07.064 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.064 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:07.064 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:07.064 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:07.064 [223/265] Linking static target drivers/librte_mempool_ring.a 00:05:07.632 
[224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.569 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:12.761 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:12.761 [227/265] Linking static target lib/librte_vhost.a 00:05:13.020 [228/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.957 [229/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.957 [230/265] Linking target lib/librte_eal.so.24.0 00:05:14.217 [231/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:05:14.217 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:05:14.217 [233/265] Linking target lib/librte_meter.so.24.0 00:05:14.217 [234/265] Linking target lib/librte_pci.so.24.0 00:05:14.217 [235/265] Linking target lib/librte_ring.so.24.0 00:05:14.217 [236/265] Linking target lib/librte_timer.so.24.0 00:05:14.217 [237/265] Linking target lib/librte_dmadev.so.24.0 00:05:14.476 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:05:14.476 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:05:14.476 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:05:14.476 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:05:14.476 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:05:14.476 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:05:14.476 [244/265] Linking target lib/librte_rcu.so.24.0 00:05:14.476 [245/265] Linking target lib/librte_mempool.so.24.0 00:05:14.735 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:05:14.735 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:05:14.735 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:05:14.735 [249/265] Linking target lib/librte_mbuf.so.24.0 00:05:14.735 [250/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.735 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:05:14.994 [252/265] Linking target lib/librte_reorder.so.24.0 00:05:14.994 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:05:14.994 [254/265] Linking target lib/librte_compressdev.so.24.0 00:05:14.994 [255/265] Linking target lib/librte_net.so.24.0 00:05:14.994 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:05:14.994 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:05:15.252 [258/265] Linking target lib/librte_cmdline.so.24.0 00:05:15.252 [259/265] Linking target lib/librte_hash.so.24.0 00:05:15.252 [260/265] Linking target lib/librte_security.so.24.0 00:05:15.252 [261/265] Linking target lib/librte_ethdev.so.24.0 00:05:15.252 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:05:15.252 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:05:15.512 [264/265] Linking target lib/librte_power.so.24.0 00:05:15.512 [265/265] Linking target lib/librte_vhost.so.24.0 00:05:15.512 INFO: autodetecting backend as ninja 00:05:15.512 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:16.453 CC lib/ut/ut.o 00:05:16.453 CC lib/log/log.o 00:05:16.453 CC lib/log/log_flags.o 00:05:16.453 CC lib/log/log_deprecated.o 00:05:16.453 CC lib/ut_mock/mock.o 00:05:16.712 LIB libspdk_log.a 00:05:16.712 LIB libspdk_ut_mock.a 00:05:16.712 LIB libspdk_ut.a 00:05:16.971 CC lib/ioat/ioat.o 00:05:16.971 CC lib/dma/dma.o 00:05:16.971 CC lib/util/bit_array.o 00:05:16.971 CC lib/util/crc16.o 00:05:16.971 CC lib/util/cpuset.o 00:05:16.971 CC lib/util/base64.o 00:05:16.971 CC lib/util/crc32.o 00:05:16.971 CC lib/util/crc32c.o 00:05:16.971 CXX lib/trace_parser/trace.o 00:05:16.971 CC lib/vfio_user/host/vfio_user_pci.o 00:05:16.971 CC lib/util/crc32_ieee.o 00:05:16.971 CC lib/util/crc64.o 00:05:16.971 CC lib/util/dif.o 00:05:16.971 CC lib/vfio_user/host/vfio_user.o 00:05:17.230 CC lib/util/fd.o 00:05:17.230 LIB libspdk_dma.a 00:05:17.230 CC lib/util/file.o 00:05:17.230 CC lib/util/hexlify.o 00:05:17.230 CC lib/util/iov.o 00:05:17.230 CC lib/util/math.o 00:05:17.230 LIB libspdk_ioat.a 00:05:17.230 CC lib/util/pipe.o 00:05:17.230 CC lib/util/strerror_tls.o 00:05:17.230 CC lib/util/string.o 00:05:17.230 CC lib/util/uuid.o 00:05:17.230 LIB libspdk_vfio_user.a 00:05:17.230 CC lib/util/fd_group.o 00:05:17.230 CC lib/util/xor.o 00:05:17.230 CC lib/util/zipf.o 00:05:17.798 LIB libspdk_util.a 00:05:18.058 CC lib/json/json_parse.o 00:05:18.058 CC lib/json/json_util.o 00:05:18.058 CC lib/json/json_write.o 00:05:18.058 CC lib/idxd/idxd.o 00:05:18.058 CC lib/vmd/vmd.o 00:05:18.058 CC lib/idxd/idxd_user.o 00:05:18.058 CC lib/conf/conf.o 00:05:18.058 CC lib/rdma/common.o 00:05:18.058 CC lib/env_dpdk/env.o 00:05:18.316 CC lib/rdma/rdma_verbs.o 00:05:18.316 CC lib/vmd/led.o 00:05:18.316 CC lib/env_dpdk/memory.o 00:05:18.316 LIB libspdk_conf.a 00:05:18.316 LIB libspdk_json.a 00:05:18.316 CC lib/env_dpdk/pci.o 00:05:18.316 CC lib/env_dpdk/init.o 00:05:18.316 CC lib/env_dpdk/threads.o 00:05:18.316 CC lib/env_dpdk/pci_ioat.o 00:05:18.316 LIB libspdk_rdma.a 00:05:18.574 CC lib/env_dpdk/pci_virtio.o 00:05:18.574 CC lib/env_dpdk/pci_vmd.o 00:05:18.574 LIB libspdk_idxd.a 00:05:18.574 CC lib/jsonrpc/jsonrpc_server.o 00:05:18.574 CC lib/env_dpdk/pci_idxd.o 00:05:18.574 CC lib/env_dpdk/pci_event.o 00:05:18.574 CC lib/env_dpdk/sigbus_handler.o 00:05:18.574 LIB libspdk_vmd.a 00:05:18.574 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:18.574 LIB libspdk_trace_parser.a 00:05:18.574 CC lib/env_dpdk/pci_dpdk.o 00:05:18.834 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:18.834 CC lib/jsonrpc/jsonrpc_client.o 00:05:18.834 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:18.834 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:19.093 LIB libspdk_jsonrpc.a 00:05:19.351 CC lib/rpc/rpc.o 00:05:19.609 LIB libspdk_env_dpdk.a 00:05:19.609 LIB libspdk_rpc.a 00:05:19.868 CC lib/notify/notify.o 00:05:19.868 CC lib/trace/trace.o 00:05:19.868 CC lib/notify/notify_rpc.o 00:05:19.868 CC lib/trace/trace_flags.o 00:05:19.868 CC lib/trace/trace_rpc.o 00:05:19.868 CC lib/sock/sock.o 00:05:19.868 CC lib/sock/sock_rpc.o 00:05:19.868 LIB libspdk_notify.a 00:05:20.126 LIB libspdk_trace.a 00:05:20.126 LIB libspdk_sock.a 00:05:20.385 CC lib/thread/thread.o 00:05:20.385 CC lib/thread/iobuf.o 00:05:20.385 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:20.385 CC lib/nvme/nvme_ctrlr.o 00:05:20.385 CC lib/nvme/nvme_ns_cmd.o 00:05:20.385 CC lib/nvme/nvme_fabric.o 00:05:20.385 CC lib/nvme/nvme_ns.o 00:05:20.385 CC lib/nvme/nvme_pcie_common.o 00:05:20.385 CC lib/nvme/nvme_pcie.o 00:05:20.385 CC lib/nvme/nvme_qpair.o 00:05:20.644 CC lib/nvme/nvme.o 
00:05:20.902 CC lib/nvme/nvme_quirks.o 00:05:21.161 CC lib/nvme/nvme_transport.o 00:05:21.161 CC lib/nvme/nvme_discovery.o 00:05:21.161 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:21.161 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:21.161 CC lib/nvme/nvme_tcp.o 00:05:21.424 CC lib/nvme/nvme_opal.o 00:05:21.424 CC lib/nvme/nvme_io_msg.o 00:05:21.424 CC lib/nvme/nvme_poll_group.o 00:05:21.424 CC lib/nvme/nvme_zns.o 00:05:21.682 CC lib/nvme/nvme_cuse.o 00:05:21.682 CC lib/nvme/nvme_vfio_user.o 00:05:21.682 CC lib/nvme/nvme_rdma.o 00:05:21.940 LIB libspdk_thread.a 00:05:22.197 CC lib/virtio/virtio.o 00:05:22.197 CC lib/virtio/virtio_vhost_user.o 00:05:22.197 CC lib/accel/accel.o 00:05:22.197 CC lib/init/json_config.o 00:05:22.197 CC lib/blob/blobstore.o 00:05:22.197 CC lib/blob/request.o 00:05:22.455 CC lib/blob/zeroes.o 00:05:22.455 CC lib/init/subsystem.o 00:05:22.455 CC lib/init/subsystem_rpc.o 00:05:22.455 CC lib/virtio/virtio_vfio_user.o 00:05:22.455 CC lib/virtio/virtio_pci.o 00:05:22.455 CC lib/blob/blob_bs_dev.o 00:05:22.713 CC lib/accel/accel_rpc.o 00:05:22.713 CC lib/init/rpc.o 00:05:22.713 CC lib/accel/accel_sw.o 00:05:22.713 LIB libspdk_init.a 00:05:22.970 LIB libspdk_virtio.a 00:05:22.970 CC lib/event/reactor.o 00:05:22.970 CC lib/event/app.o 00:05:22.970 CC lib/event/log_rpc.o 00:05:22.970 CC lib/event/app_rpc.o 00:05:22.970 CC lib/event/scheduler_static.o 00:05:23.228 LIB libspdk_nvme.a 00:05:23.228 LIB libspdk_accel.a 00:05:23.485 LIB libspdk_event.a 00:05:23.485 CC lib/bdev/bdev.o 00:05:23.485 CC lib/bdev/bdev_zone.o 00:05:23.485 CC lib/bdev/bdev_rpc.o 00:05:23.485 CC lib/bdev/part.o 00:05:23.485 CC lib/bdev/scsi_nvme.o 00:05:25.439 LIB libspdk_blob.a 00:05:25.697 CC lib/lvol/lvol.o 00:05:25.697 CC lib/blobfs/blobfs.o 00:05:25.697 CC lib/blobfs/tree.o 00:05:26.633 LIB libspdk_bdev.a 00:05:26.633 LIB libspdk_blobfs.a 00:05:26.633 LIB libspdk_lvol.a 00:05:26.633 CC lib/ftl/ftl_core.o 00:05:26.633 CC lib/ftl/ftl_layout.o 00:05:26.633 CC lib/ftl/ftl_init.o 00:05:26.633 CC lib/ftl/ftl_debug.o 00:05:26.633 CC lib/ftl/ftl_io.o 00:05:26.633 CC lib/ftl/ftl_sb.o 00:05:26.633 CC lib/nvmf/ctrlr.o 00:05:26.633 CC lib/nbd/nbd.o 00:05:26.633 CC lib/scsi/dev.o 00:05:26.892 CC lib/nbd/nbd_rpc.o 00:05:26.892 CC lib/ftl/ftl_l2p.o 00:05:26.892 CC lib/ftl/ftl_l2p_flat.o 00:05:26.892 CC lib/nvmf/ctrlr_discovery.o 00:05:26.892 CC lib/scsi/lun.o 00:05:27.150 CC lib/scsi/port.o 00:05:27.150 CC lib/scsi/scsi.o 00:05:27.150 CC lib/scsi/scsi_bdev.o 00:05:27.150 CC lib/scsi/scsi_pr.o 00:05:27.150 CC lib/ftl/ftl_nv_cache.o 00:05:27.150 CC lib/ftl/ftl_band.o 00:05:27.150 CC lib/scsi/scsi_rpc.o 00:05:27.410 CC lib/nvmf/ctrlr_bdev.o 00:05:27.410 CC lib/ftl/ftl_band_ops.o 00:05:27.410 CC lib/scsi/task.o 00:05:27.410 CC lib/ftl/ftl_writer.o 00:05:27.410 LIB libspdk_nbd.a 00:05:27.410 CC lib/ftl/ftl_rq.o 00:05:27.410 CC lib/nvmf/subsystem.o 00:05:27.670 CC lib/nvmf/nvmf.o 00:05:27.670 LIB libspdk_scsi.a 00:05:27.670 CC lib/nvmf/nvmf_rpc.o 00:05:27.670 CC lib/ftl/ftl_reloc.o 00:05:27.670 CC lib/ftl/ftl_l2p_cache.o 00:05:27.670 CC lib/iscsi/conn.o 00:05:27.670 CC lib/vhost/vhost.o 00:05:27.929 CC lib/vhost/vhost_rpc.o 00:05:28.188 CC lib/nvmf/transport.o 00:05:28.188 CC lib/nvmf/tcp.o 00:05:28.188 CC lib/ftl/ftl_p2l.o 00:05:28.188 CC lib/ftl/mngt/ftl_mngt.o 00:05:28.448 CC lib/nvmf/rdma.o 00:05:28.448 CC lib/iscsi/init_grp.o 00:05:28.448 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:28.448 CC lib/vhost/vhost_scsi.o 00:05:28.448 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:28.448 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:28.720 CC 
lib/vhost/vhost_blk.o 00:05:28.720 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:28.720 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:28.720 CC lib/vhost/rte_vhost_user.o 00:05:28.720 CC lib/iscsi/iscsi.o 00:05:28.720 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:28.720 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:28.979 CC lib/iscsi/md5.o 00:05:28.979 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:28.979 CC lib/iscsi/param.o 00:05:28.979 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:28.979 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:29.237 CC lib/iscsi/portal_grp.o 00:05:29.237 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:29.237 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:29.237 CC lib/iscsi/tgt_node.o 00:05:29.237 CC lib/ftl/utils/ftl_conf.o 00:05:29.496 CC lib/ftl/utils/ftl_md.o 00:05:29.496 CC lib/ftl/utils/ftl_mempool.o 00:05:29.496 CC lib/iscsi/iscsi_subsystem.o 00:05:29.496 CC lib/ftl/utils/ftl_bitmap.o 00:05:29.496 LIB libspdk_vhost.a 00:05:29.755 CC lib/ftl/utils/ftl_property.o 00:05:29.755 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:29.755 CC lib/iscsi/iscsi_rpc.o 00:05:29.755 CC lib/iscsi/task.o 00:05:29.755 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:29.755 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:29.755 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:29.755 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:29.755 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:30.015 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:30.015 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:30.015 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:30.015 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:30.015 CC lib/ftl/base/ftl_base_dev.o 00:05:30.015 CC lib/ftl/base/ftl_base_bdev.o 00:05:30.015 CC lib/ftl/ftl_trace.o 00:05:30.274 LIB libspdk_iscsi.a 00:05:30.274 LIB libspdk_ftl.a 00:05:30.842 LIB libspdk_nvmf.a 00:05:31.102 CC module/env_dpdk/env_dpdk_rpc.o 00:05:31.360 CC module/blob/bdev/blob_bdev.o 00:05:31.360 CC module/accel/ioat/accel_ioat.o 00:05:31.360 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:31.360 CC module/accel/error/accel_error.o 00:05:31.360 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:31.360 CC module/accel/dsa/accel_dsa.o 00:05:31.360 CC module/accel/iaa/accel_iaa.o 00:05:31.361 CC module/scheduler/gscheduler/gscheduler.o 00:05:31.361 CC module/sock/posix/posix.o 00:05:31.361 LIB libspdk_env_dpdk_rpc.a 00:05:31.361 CC module/accel/dsa/accel_dsa_rpc.o 00:05:31.361 LIB libspdk_scheduler_dpdk_governor.a 00:05:31.361 LIB libspdk_scheduler_gscheduler.a 00:05:31.361 CC module/accel/ioat/accel_ioat_rpc.o 00:05:31.361 LIB libspdk_scheduler_dynamic.a 00:05:31.619 CC module/accel/iaa/accel_iaa_rpc.o 00:05:31.619 CC module/accel/error/accel_error_rpc.o 00:05:31.878 LIB libspdk_accel_ioat.a 00:05:31.878 LIB libspdk_accel_dsa.a 00:05:31.878 LIB libspdk_blob_bdev.a 00:05:31.878 LIB libspdk_accel_iaa.a 00:05:31.878 LIB libspdk_accel_error.a 00:05:31.878 CC module/bdev/lvol/vbdev_lvol.o 00:05:32.137 CC module/bdev/malloc/bdev_malloc.o 00:05:32.137 CC module/bdev/null/bdev_null.o 00:05:32.137 CC module/bdev/gpt/gpt.o 00:05:32.137 CC module/bdev/error/vbdev_error.o 00:05:32.137 CC module/bdev/delay/vbdev_delay.o 00:05:32.137 CC module/bdev/nvme/bdev_nvme.o 00:05:32.137 CC module/bdev/passthru/vbdev_passthru.o 00:05:32.137 CC module/blobfs/bdev/blobfs_bdev.o 00:05:32.137 CC module/bdev/gpt/vbdev_gpt.o 00:05:32.137 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:32.137 LIB libspdk_sock_posix.a 00:05:32.137 CC module/bdev/null/bdev_null_rpc.o 00:05:32.395 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:32.395 LIB libspdk_blobfs_bdev.a 00:05:32.395 CC module/bdev/passthru/vbdev_passthru_rpc.o 
00:05:32.395 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:32.395 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:32.395 CC module/bdev/nvme/nvme_rpc.o 00:05:32.395 CC module/bdev/error/vbdev_error_rpc.o 00:05:32.395 LIB libspdk_bdev_gpt.a 00:05:32.395 LIB libspdk_bdev_delay.a 00:05:32.395 LIB libspdk_bdev_null.a 00:05:32.395 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:32.395 CC module/bdev/nvme/bdev_mdns_client.o 00:05:32.395 CC module/bdev/nvme/vbdev_opal.o 00:05:32.395 LIB libspdk_bdev_passthru.a 00:05:32.654 LIB libspdk_bdev_malloc.a 00:05:32.654 CC module/bdev/raid/bdev_raid.o 00:05:32.654 LIB libspdk_bdev_error.a 00:05:32.654 CC module/bdev/raid/bdev_raid_rpc.o 00:05:32.654 CC module/bdev/raid/bdev_raid_sb.o 00:05:32.654 CC module/bdev/split/vbdev_split.o 00:05:32.654 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:32.654 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:32.654 CC module/bdev/split/vbdev_split_rpc.o 00:05:32.654 LIB libspdk_bdev_lvol.a 00:05:32.912 CC module/bdev/raid/raid0.o 00:05:32.912 CC module/bdev/raid/raid1.o 00:05:32.912 CC module/bdev/raid/concat.o 00:05:32.912 CC module/bdev/aio/bdev_aio.o 00:05:32.912 LIB libspdk_bdev_split.a 00:05:32.912 CC module/bdev/ftl/bdev_ftl.o 00:05:32.912 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:32.912 CC module/bdev/raid/raid5f.o 00:05:32.912 LIB libspdk_bdev_zone_block.a 00:05:33.171 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:33.171 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:33.171 CC module/bdev/iscsi/bdev_iscsi.o 00:05:33.171 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:33.171 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:33.171 CC module/bdev/aio/bdev_aio_rpc.o 00:05:33.171 LIB libspdk_bdev_ftl.a 00:05:33.171 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:33.171 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:33.430 LIB libspdk_bdev_aio.a 00:05:33.430 LIB libspdk_bdev_iscsi.a 00:05:33.430 LIB libspdk_bdev_raid.a 00:05:33.688 LIB libspdk_bdev_virtio.a 00:05:34.255 LIB libspdk_bdev_nvme.a 00:05:34.824 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:34.824 CC module/event/subsystems/vmd/vmd.o 00:05:34.824 CC module/event/subsystems/iobuf/iobuf.o 00:05:34.824 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:34.824 CC module/event/subsystems/sock/sock.o 00:05:34.824 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:34.824 CC module/event/subsystems/scheduler/scheduler.o 00:05:34.824 LIB libspdk_event_vhost_blk.a 00:05:34.824 LIB libspdk_event_sock.a 00:05:34.824 LIB libspdk_event_vmd.a 00:05:34.824 LIB libspdk_event_iobuf.a 00:05:34.824 LIB libspdk_event_scheduler.a 00:05:35.082 CC module/event/subsystems/accel/accel.o 00:05:35.341 LIB libspdk_event_accel.a 00:05:35.629 CC module/event/subsystems/bdev/bdev.o 00:05:35.629 LIB libspdk_event_bdev.a 00:05:35.915 CC module/event/subsystems/scsi/scsi.o 00:05:35.915 CC module/event/subsystems/nbd/nbd.o 00:05:35.915 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:35.915 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:36.174 LIB libspdk_event_scsi.a 00:05:36.174 LIB libspdk_event_nbd.a 00:05:36.174 LIB libspdk_event_nvmf.a 00:05:36.433 CC module/event/subsystems/iscsi/iscsi.o 00:05:36.433 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:36.433 LIB libspdk_event_vhost_scsi.a 00:05:36.695 LIB libspdk_event_iscsi.a 00:05:36.695 TEST_HEADER include/spdk/accel.h 00:05:36.695 TEST_HEADER include/spdk/accel_module.h 00:05:36.695 CXX app/trace/trace.o 00:05:36.695 TEST_HEADER include/spdk/assert.h 00:05:36.695 TEST_HEADER include/spdk/barrier.h 00:05:36.695 TEST_HEADER 
include/spdk/base64.h 00:05:36.695 TEST_HEADER include/spdk/bdev.h 00:05:36.695 TEST_HEADER include/spdk/bdev_module.h 00:05:36.695 TEST_HEADER include/spdk/bdev_zone.h 00:05:36.695 TEST_HEADER include/spdk/bit_array.h 00:05:36.695 TEST_HEADER include/spdk/bit_pool.h 00:05:36.695 TEST_HEADER include/spdk/blob.h 00:05:36.695 TEST_HEADER include/spdk/blob_bdev.h 00:05:36.956 TEST_HEADER include/spdk/blobfs.h 00:05:36.956 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:36.956 TEST_HEADER include/spdk/conf.h 00:05:36.956 CC test/event/event_perf/event_perf.o 00:05:36.956 TEST_HEADER include/spdk/config.h 00:05:36.956 TEST_HEADER include/spdk/cpuset.h 00:05:36.956 TEST_HEADER include/spdk/crc16.h 00:05:36.956 CC examples/accel/perf/accel_perf.o 00:05:36.956 TEST_HEADER include/spdk/crc32.h 00:05:36.956 TEST_HEADER include/spdk/crc64.h 00:05:36.956 TEST_HEADER include/spdk/dif.h 00:05:36.956 TEST_HEADER include/spdk/dma.h 00:05:36.956 TEST_HEADER include/spdk/endian.h 00:05:36.956 TEST_HEADER include/spdk/env.h 00:05:36.956 TEST_HEADER include/spdk/env_dpdk.h 00:05:36.956 TEST_HEADER include/spdk/event.h 00:05:36.956 CC test/blobfs/mkfs/mkfs.o 00:05:36.956 TEST_HEADER include/spdk/fd.h 00:05:36.956 TEST_HEADER include/spdk/fd_group.h 00:05:36.956 TEST_HEADER include/spdk/file.h 00:05:36.956 TEST_HEADER include/spdk/ftl.h 00:05:36.956 CC test/app/bdev_svc/bdev_svc.o 00:05:36.956 CC test/dma/test_dma/test_dma.o 00:05:36.956 TEST_HEADER include/spdk/gpt_spec.h 00:05:36.956 CC test/bdev/bdevio/bdevio.o 00:05:36.956 TEST_HEADER include/spdk/hexlify.h 00:05:36.956 TEST_HEADER include/spdk/histogram_data.h 00:05:36.956 TEST_HEADER include/spdk/idxd.h 00:05:36.956 TEST_HEADER include/spdk/idxd_spec.h 00:05:36.956 TEST_HEADER include/spdk/init.h 00:05:36.956 TEST_HEADER include/spdk/ioat.h 00:05:36.956 TEST_HEADER include/spdk/ioat_spec.h 00:05:36.956 TEST_HEADER include/spdk/iscsi_spec.h 00:05:36.956 TEST_HEADER include/spdk/json.h 00:05:36.956 TEST_HEADER include/spdk/jsonrpc.h 00:05:36.957 TEST_HEADER include/spdk/likely.h 00:05:36.957 TEST_HEADER include/spdk/log.h 00:05:36.957 TEST_HEADER include/spdk/lvol.h 00:05:36.957 TEST_HEADER include/spdk/memory.h 00:05:36.957 CC test/accel/dif/dif.o 00:05:36.957 CC test/env/mem_callbacks/mem_callbacks.o 00:05:36.957 TEST_HEADER include/spdk/mmio.h 00:05:36.957 TEST_HEADER include/spdk/nbd.h 00:05:36.957 TEST_HEADER include/spdk/notify.h 00:05:36.957 TEST_HEADER include/spdk/nvme.h 00:05:36.957 TEST_HEADER include/spdk/nvme_intel.h 00:05:36.957 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:36.957 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:36.957 TEST_HEADER include/spdk/nvme_spec.h 00:05:36.957 TEST_HEADER include/spdk/nvme_zns.h 00:05:36.957 TEST_HEADER include/spdk/nvmf.h 00:05:36.957 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:36.957 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:36.957 TEST_HEADER include/spdk/nvmf_spec.h 00:05:36.957 TEST_HEADER include/spdk/nvmf_transport.h 00:05:36.957 TEST_HEADER include/spdk/opal.h 00:05:36.957 TEST_HEADER include/spdk/opal_spec.h 00:05:36.957 TEST_HEADER include/spdk/pci_ids.h 00:05:36.957 TEST_HEADER include/spdk/pipe.h 00:05:36.957 TEST_HEADER include/spdk/queue.h 00:05:36.957 TEST_HEADER include/spdk/reduce.h 00:05:36.957 TEST_HEADER include/spdk/rpc.h 00:05:36.957 TEST_HEADER include/spdk/scheduler.h 00:05:36.957 TEST_HEADER include/spdk/scsi.h 00:05:36.957 TEST_HEADER include/spdk/scsi_spec.h 00:05:36.957 TEST_HEADER include/spdk/sock.h 00:05:36.957 TEST_HEADER include/spdk/stdinc.h 00:05:36.957 TEST_HEADER 
include/spdk/string.h 00:05:36.957 TEST_HEADER include/spdk/thread.h 00:05:36.957 TEST_HEADER include/spdk/trace.h 00:05:36.957 TEST_HEADER include/spdk/trace_parser.h 00:05:36.957 TEST_HEADER include/spdk/tree.h 00:05:36.957 TEST_HEADER include/spdk/ublk.h 00:05:36.957 TEST_HEADER include/spdk/util.h 00:05:36.957 TEST_HEADER include/spdk/uuid.h 00:05:36.957 TEST_HEADER include/spdk/version.h 00:05:36.957 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:36.957 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:36.957 TEST_HEADER include/spdk/vhost.h 00:05:36.957 TEST_HEADER include/spdk/vmd.h 00:05:36.957 TEST_HEADER include/spdk/xor.h 00:05:36.957 TEST_HEADER include/spdk/zipf.h 00:05:36.957 CXX test/cpp_headers/accel.o 00:05:36.957 LINK event_perf 00:05:36.957 LINK mkfs 00:05:36.957 LINK bdev_svc 00:05:37.215 CXX test/cpp_headers/accel_module.o 00:05:37.215 LINK test_dma 00:05:37.215 LINK bdevio 00:05:37.215 LINK spdk_trace 00:05:37.215 LINK dif 00:05:37.215 CXX test/cpp_headers/assert.o 00:05:37.215 LINK mem_callbacks 00:05:37.215 LINK accel_perf 00:05:37.474 CXX test/cpp_headers/barrier.o 00:05:37.474 CXX test/cpp_headers/base64.o 00:05:37.732 CC app/trace_record/trace_record.o 00:05:37.732 CXX test/cpp_headers/bdev.o 00:05:37.732 CC test/env/vtophys/vtophys.o 00:05:37.732 CC test/event/reactor/reactor.o 00:05:37.990 LINK spdk_trace_record 00:05:37.990 LINK vtophys 00:05:37.990 CXX test/cpp_headers/bdev_module.o 00:05:37.990 LINK reactor 00:05:37.990 CXX test/cpp_headers/bdev_zone.o 00:05:38.248 CXX test/cpp_headers/bit_array.o 00:05:38.248 CXX test/cpp_headers/bit_pool.o 00:05:38.506 CC examples/bdev/hello_world/hello_bdev.o 00:05:38.506 CC app/nvmf_tgt/nvmf_main.o 00:05:38.506 CXX test/cpp_headers/blob.o 00:05:38.506 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:38.506 LINK nvmf_tgt 00:05:38.764 LINK hello_bdev 00:05:38.764 CC test/event/reactor_perf/reactor_perf.o 00:05:38.764 CXX test/cpp_headers/blob_bdev.o 00:05:38.764 LINK env_dpdk_post_init 00:05:38.764 LINK reactor_perf 00:05:38.764 CXX test/cpp_headers/blobfs.o 00:05:39.076 CXX test/cpp_headers/blobfs_bdev.o 00:05:39.076 CXX test/cpp_headers/conf.o 00:05:39.334 CXX test/cpp_headers/config.o 00:05:39.334 CXX test/cpp_headers/cpuset.o 00:05:39.593 CXX test/cpp_headers/crc16.o 00:05:39.593 CC test/event/app_repeat/app_repeat.o 00:05:39.593 CXX test/cpp_headers/crc32.o 00:05:39.850 LINK app_repeat 00:05:39.850 CXX test/cpp_headers/crc64.o 00:05:39.850 CC test/env/memory/memory_ut.o 00:05:39.850 CXX test/cpp_headers/dif.o 00:05:40.108 CXX test/cpp_headers/dma.o 00:05:40.108 CXX test/cpp_headers/endian.o 00:05:40.108 CC test/event/scheduler/scheduler.o 00:05:40.108 CC examples/blob/hello_world/hello_blob.o 00:05:40.108 CXX test/cpp_headers/env.o 00:05:40.366 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:40.366 CC examples/nvme/hello_world/hello_world.o 00:05:40.366 LINK scheduler 00:05:40.366 CC examples/ioat/perf/perf.o 00:05:40.366 CXX test/cpp_headers/env_dpdk.o 00:05:40.366 LINK hello_blob 00:05:40.625 CXX test/cpp_headers/event.o 00:05:40.625 LINK hello_world 00:05:40.625 LINK memory_ut 00:05:40.625 LINK ioat_perf 00:05:40.625 LINK nvme_fuzz 00:05:40.625 CXX test/cpp_headers/fd.o 00:05:40.884 CXX test/cpp_headers/fd_group.o 00:05:40.884 CC test/app/histogram_perf/histogram_perf.o 00:05:40.884 CC test/env/pci/pci_ut.o 00:05:40.884 CXX test/cpp_headers/file.o 00:05:41.143 LINK histogram_perf 00:05:41.143 CXX test/cpp_headers/ftl.o 00:05:41.143 CC examples/ioat/verify/verify.o 00:05:41.402 LINK pci_ut 00:05:41.402 CXX 
test/cpp_headers/gpt_spec.o 00:05:41.402 LINK verify 00:05:41.402 CXX test/cpp_headers/hexlify.o 00:05:41.660 CXX test/cpp_headers/histogram_data.o 00:05:41.660 CC examples/bdev/bdevperf/bdevperf.o 00:05:41.660 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:41.660 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:41.660 CXX test/cpp_headers/idxd.o 00:05:41.919 CC examples/nvme/reconnect/reconnect.o 00:05:41.919 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:41.919 CC app/iscsi_tgt/iscsi_tgt.o 00:05:41.919 CXX test/cpp_headers/idxd_spec.o 00:05:41.919 CXX test/cpp_headers/init.o 00:05:41.919 CXX test/cpp_headers/ioat.o 00:05:42.177 LINK iscsi_tgt 00:05:42.177 CXX test/cpp_headers/ioat_spec.o 00:05:42.177 CC examples/blob/cli/blobcli.o 00:05:42.177 LINK reconnect 00:05:42.177 CC app/spdk_tgt/spdk_tgt.o 00:05:42.436 CXX test/cpp_headers/iscsi_spec.o 00:05:42.436 LINK vhost_fuzz 00:05:42.436 LINK bdevperf 00:05:42.436 CXX test/cpp_headers/json.o 00:05:42.436 LINK spdk_tgt 00:05:42.693 CXX test/cpp_headers/jsonrpc.o 00:05:42.693 LINK blobcli 00:05:42.693 CXX test/cpp_headers/likely.o 00:05:42.952 CXX test/cpp_headers/log.o 00:05:43.211 CXX test/cpp_headers/lvol.o 00:05:43.211 CXX test/cpp_headers/memory.o 00:05:43.470 CC test/app/jsoncat/jsoncat.o 00:05:43.470 CXX test/cpp_headers/mmio.o 00:05:43.470 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:43.470 LINK iscsi_fuzz 00:05:43.470 LINK jsoncat 00:05:43.470 CXX test/cpp_headers/nbd.o 00:05:43.729 CXX test/cpp_headers/notify.o 00:05:43.729 CXX test/cpp_headers/nvme.o 00:05:43.987 CXX test/cpp_headers/nvme_intel.o 00:05:43.987 LINK nvme_manage 00:05:43.987 CXX test/cpp_headers/nvme_ocssd.o 00:05:43.987 CC examples/sock/hello_world/hello_sock.o 00:05:44.346 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:44.346 CC test/lvol/esnap/esnap.o 00:05:44.346 CC test/app/stub/stub.o 00:05:44.346 LINK hello_sock 00:05:44.346 CXX test/cpp_headers/nvme_spec.o 00:05:44.346 LINK stub 00:05:44.607 CXX test/cpp_headers/nvme_zns.o 00:05:44.865 CXX test/cpp_headers/nvmf.o 00:05:44.865 CC app/spdk_lspci/spdk_lspci.o 00:05:45.123 CXX test/cpp_headers/nvmf_cmd.o 00:05:45.123 CC examples/nvme/arbitration/arbitration.o 00:05:45.123 LINK spdk_lspci 00:05:45.123 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:45.384 CXX test/cpp_headers/nvmf_spec.o 00:05:45.384 LINK arbitration 00:05:45.642 CXX test/cpp_headers/nvmf_transport.o 00:05:45.642 CC examples/vmd/lsvmd/lsvmd.o 00:05:45.642 CC examples/nvmf/nvmf/nvmf.o 00:05:45.642 CC app/spdk_nvme_perf/perf.o 00:05:45.900 CXX test/cpp_headers/opal.o 00:05:45.900 LINK lsvmd 00:05:45.900 CC app/spdk_nvme_identify/identify.o 00:05:45.900 CXX test/cpp_headers/opal_spec.o 00:05:45.900 LINK nvmf 00:05:46.159 CC examples/util/zipf/zipf.o 00:05:46.159 CXX test/cpp_headers/pci_ids.o 00:05:46.417 CC app/spdk_nvme_discover/discovery_aer.o 00:05:46.417 CC examples/thread/thread/thread_ex.o 00:05:46.417 LINK zipf 00:05:46.417 CXX test/cpp_headers/pipe.o 00:05:46.417 LINK spdk_nvme_perf 00:05:46.417 LINK spdk_nvme_discover 00:05:46.676 CXX test/cpp_headers/queue.o 00:05:46.676 LINK thread 00:05:46.676 LINK spdk_nvme_identify 00:05:46.676 CXX test/cpp_headers/reduce.o 00:05:46.676 CC examples/nvme/hotplug/hotplug.o 00:05:46.935 CXX test/cpp_headers/rpc.o 00:05:46.935 CXX test/cpp_headers/scheduler.o 00:05:46.935 LINK hotplug 00:05:46.935 CXX test/cpp_headers/scsi.o 00:05:47.194 CC examples/vmd/led/led.o 00:05:47.194 CC examples/idxd/perf/perf.o 00:05:47.194 CXX test/cpp_headers/scsi_spec.o 00:05:47.194 LINK led 00:05:47.452 CXX 
test/cpp_headers/sock.o 00:05:47.452 CXX test/cpp_headers/stdinc.o 00:05:47.452 CXX test/cpp_headers/string.o 00:05:47.711 LINK idxd_perf 00:05:47.711 CXX test/cpp_headers/thread.o 00:05:47.711 CXX test/cpp_headers/trace.o 00:05:47.711 CC app/spdk_top/spdk_top.o 00:05:47.711 CC app/vhost/vhost.o 00:05:48.005 CC app/spdk_dd/spdk_dd.o 00:05:48.005 CXX test/cpp_headers/trace_parser.o 00:05:48.005 LINK vhost 00:05:48.005 CXX test/cpp_headers/tree.o 00:05:48.005 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:48.280 CXX test/cpp_headers/ublk.o 00:05:48.280 CXX test/cpp_headers/util.o 00:05:48.280 LINK spdk_dd 00:05:48.280 LINK cmb_copy 00:05:48.280 CXX test/cpp_headers/uuid.o 00:05:48.280 CC app/fio/nvme/fio_plugin.o 00:05:48.540 CC app/fio/bdev/fio_plugin.o 00:05:48.540 CXX test/cpp_headers/version.o 00:05:48.540 CXX test/cpp_headers/vfio_user_pci.o 00:05:48.540 LINK spdk_top 00:05:48.799 CXX test/cpp_headers/vfio_user_spec.o 00:05:48.799 CXX test/cpp_headers/vhost.o 00:05:48.799 LINK spdk_bdev 00:05:49.058 LINK spdk_nvme 00:05:49.058 CXX test/cpp_headers/vmd.o 00:05:49.058 CXX test/cpp_headers/xor.o 00:05:49.058 CXX test/cpp_headers/zipf.o 00:05:49.058 CC examples/nvme/abort/abort.o 00:05:49.058 LINK esnap 00:05:49.318 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:49.318 CC test/nvme/aer/aer.o 00:05:49.318 CC test/nvme/reset/reset.o 00:05:49.318 LINK pmr_persistence 00:05:49.577 LINK abort 00:05:49.577 LINK aer 00:05:49.577 LINK reset 00:05:49.837 CC test/nvme/sgl/sgl.o 00:05:50.096 LINK sgl 00:05:50.096 CC test/nvme/e2edp/nvme_dp.o 00:05:50.355 CC test/nvme/overhead/overhead.o 00:05:50.355 LINK nvme_dp 00:05:50.613 LINK overhead 00:05:50.613 CC test/nvme/err_injection/err_injection.o 00:05:50.872 CC test/nvme/startup/startup.o 00:05:50.872 LINK err_injection 00:05:50.872 CC test/nvme/reserve/reserve.o 00:05:50.872 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:51.132 LINK reserve 00:05:51.132 LINK startup 00:05:51.132 LINK interrupt_tgt 00:05:51.132 CC test/rpc_client/rpc_client_test.o 00:05:51.392 CC test/thread/poller_perf/poller_perf.o 00:05:51.392 CC test/nvme/simple_copy/simple_copy.o 00:05:51.392 LINK rpc_client_test 00:05:51.392 LINK poller_perf 00:05:51.650 CC test/nvme/connect_stress/connect_stress.o 00:05:51.650 CC test/nvme/boot_partition/boot_partition.o 00:05:51.909 CC test/nvme/compliance/nvme_compliance.o 00:05:51.909 CC test/nvme/fused_ordering/fused_ordering.o 00:05:52.169 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:52.169 LINK simple_copy 00:05:52.169 LINK boot_partition 00:05:52.169 LINK connect_stress 00:05:52.169 LINK fused_ordering 00:05:52.169 CC test/nvme/fdp/fdp.o 00:05:52.428 CC test/thread/lock/spdk_lock.o 00:05:52.428 CC test/nvme/cuse/cuse.o 00:05:52.428 LINK doorbell_aers 00:05:52.428 LINK nvme_compliance 00:05:52.689 LINK fdp 00:05:52.947 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:05:53.207 LINK cuse 00:05:53.207 LINK histogram_ut 00:05:53.466 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:05:53.466 CC test/unit/lib/accel/accel.c/accel_ut.o 00:05:53.466 CC test/unit/lib/bdev/part.c/part_ut.o 00:05:53.466 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:05:53.466 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:05:53.466 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:05:53.466 CC test/unit/lib/blob/blob.c/blob_ut.o 00:05:53.466 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:05:53.725 LINK tree_ut 00:05:53.725 LINK scsi_nvme_ut 00:05:53.725 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:05:53.984 CC 
test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:05:53.984 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:05:53.984 LINK spdk_lock 00:05:53.984 LINK blob_bdev_ut 00:05:54.244 LINK gpt_ut 00:05:54.502 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:05:54.502 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:05:54.502 LINK blobfs_bdev_ut 00:05:54.760 LINK blobfs_async_ut 00:05:54.760 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:05:54.760 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:05:54.760 LINK vbdev_lvol_ut 00:05:55.019 LINK blobfs_sync_ut 00:05:55.019 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:05:55.278 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:05:55.278 LINK bdev_raid_sb_ut 00:05:55.537 LINK bdev_zone_ut 00:05:55.537 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:05:55.537 LINK concat_ut 00:05:55.537 CC test/unit/lib/dma/dma.c/dma_ut.o 00:05:55.796 LINK accel_ut 00:05:55.796 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:05:55.796 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:05:55.796 LINK dma_ut 00:05:56.055 LINK raid1_ut 00:05:56.055 CC test/unit/lib/event/app.c/app_ut.o 00:05:56.313 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:05:56.313 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:05:56.570 LINK vbdev_zone_block_ut 00:05:56.570 LINK ioat_ut 00:05:56.570 LINK bdev_raid_ut 00:05:56.570 LINK part_ut 00:05:56.570 LINK app_ut 00:05:56.827 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:05:56.827 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:05:56.827 LINK raid5f_ut 00:05:56.827 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:05:57.086 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:05:57.086 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:05:57.086 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:05:57.345 LINK conn_ut 00:05:57.345 LINK json_util_ut 00:05:57.603 LINK jsonrpc_server_ut 00:05:57.603 CC test/unit/lib/log/log.c/log_ut.o 00:05:57.862 LINK json_write_ut 00:05:57.862 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:05:57.862 LINK reactor_ut 00:05:57.862 LINK bdev_ut 00:05:57.862 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:05:57.862 LINK log_ut 00:05:58.121 CC test/unit/lib/notify/notify.c/notify_ut.o 00:05:58.121 LINK init_grp_ut 00:05:58.121 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:05:58.121 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:05:58.380 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:05:58.380 LINK notify_ut 00:05:58.380 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:05:58.380 LINK bdev_ut 00:05:58.638 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:05:58.896 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:05:59.154 LINK json_parse_ut 00:05:59.413 LINK nvme_ut 00:05:59.413 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:05:59.413 LINK nvme_ctrlr_cmd_ut 00:05:59.672 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:05:59.672 LINK lvol_ut 00:05:59.672 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:05:59.932 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:06:00.191 LINK blob_ut 00:06:00.450 LINK nvme_ctrlr_ocssd_cmd_ut 00:06:00.450 LINK nvme_ns_ut 00:06:00.709 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:06:00.709 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:06:00.709 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:06:00.709 LINK iscsi_ut 00:06:00.968 LINK bdev_nvme_ut 00:06:00.968 LINK scsi_ut 00:06:00.968 LINK dev_ut 00:06:00.968 LINK nvme_ctrlr_ut 00:06:01.227 CC test/unit/lib/sock/sock.c/sock_ut.o 
00:06:01.227 CC test/unit/lib/iscsi/param.c/param_ut.o 00:06:01.227 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:06:01.227 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:06:01.485 LINK lun_ut 00:06:01.485 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:06:01.485 LINK subsystem_ut 00:06:01.485 LINK nvme_ns_cmd_ut 00:06:01.485 LINK ctrlr_ut 00:06:01.744 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:06:01.744 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:06:02.003 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:06:02.003 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:06:02.003 LINK param_ut 00:06:02.003 LINK tgt_node_ut 00:06:02.261 LINK portal_grp_ut 00:06:02.261 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:06:02.520 LINK tcp_ut 00:06:02.520 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:06:02.520 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:06:02.520 LINK sock_ut 00:06:02.520 LINK scsi_bdev_ut 00:06:02.779 LINK nvme_poll_group_ut 00:06:02.779 CC test/unit/lib/sock/posix.c/posix_ut.o 00:06:02.779 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:06:03.039 LINK nvme_quirks_ut 00:06:03.039 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:06:03.039 LINK ctrlr_discovery_ut 00:06:03.039 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:06:03.039 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:06:03.298 LINK nvme_ns_ocssd_cmd_ut 00:06:03.298 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:06:03.298 LINK nvme_qpair_ut 00:06:03.298 LINK scsi_pr_ut 00:06:03.555 LINK nvme_pcie_ut 00:06:03.555 CC test/unit/lib/thread/thread.c/thread_ut.o 00:06:03.555 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:06:03.813 CC test/unit/lib/util/base64.c/base64_ut.o 00:06:03.813 LINK posix_ut 00:06:03.813 LINK nvme_io_msg_ut 00:06:03.813 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:06:03.813 LINK ctrlr_bdev_ut 00:06:03.813 LINK nvme_transport_ut 00:06:03.813 LINK base64_ut 00:06:04.070 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:06:04.070 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:06:04.070 LINK pci_event_ut 00:06:04.070 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:06:04.070 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:06:04.328 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:06:04.328 LINK iobuf_ut 00:06:04.328 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:06:04.586 LINK bit_array_ut 00:06:04.586 LINK subsystem_ut 00:06:04.586 LINK nvme_pcie_common_ut 00:06:04.586 LINK rpc_ut 00:06:04.586 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:06:04.844 LINK idxd_user_ut 00:06:04.844 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:06:04.844 LINK nvme_tcp_ut 00:06:04.844 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:06:04.844 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:06:04.844 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:06:05.101 LINK cpuset_ut 00:06:05.101 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:06:05.101 LINK nvmf_ut 00:06:05.101 LINK crc16_ut 00:06:05.359 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:06:05.359 CC test/unit/lib/rdma/common.c/common_ut.o 00:06:05.359 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:06:05.359 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:06:05.617 LINK crc32_ieee_ut 00:06:05.617 LINK nvme_fabric_ut 00:06:05.617 LINK crc32c_ut 00:06:05.617 LINK thread_ut 00:06:05.617 LINK ftl_l2p_ut 00:06:05.617 LINK idxd_ut 00:06:05.617 LINK common_ut 00:06:05.876 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:06:05.876 CC 
test/unit/lib/util/crc64.c/crc64_ut.o 00:06:05.876 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:06:05.876 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:06:05.876 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:06:05.876 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:06:05.876 LINK crc64_ut 00:06:06.133 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:06:06.133 CC test/unit/lib/util/dif.c/dif_ut.o 00:06:06.134 LINK ftl_bitmap_ut 00:06:06.392 LINK ftl_io_ut 00:06:06.392 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:06:06.648 LINK nvme_opal_ut 00:06:06.648 LINK vhost_ut 00:06:06.906 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:06:06.906 CC test/unit/lib/util/iov.c/iov_ut.o 00:06:06.906 LINK ftl_mempool_ut 00:06:06.906 LINK ftl_band_ut 00:06:07.164 CC test/unit/lib/util/math.c/math_ut.o 00:06:07.164 LINK iov_ut 00:06:07.164 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:06:07.164 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:06:07.164 LINK math_ut 00:06:07.164 LINK nvme_cuse_ut 00:06:07.164 LINK dif_ut 00:06:07.422 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:06:07.422 LINK ftl_mngt_ut 00:06:07.422 LINK rdma_ut 00:06:07.422 LINK transport_ut 00:06:07.422 CC test/unit/lib/util/string.c/string_ut.o 00:06:07.422 CC test/unit/lib/util/xor.c/xor_ut.o 00:06:07.680 LINK nvme_rdma_ut 00:06:07.680 LINK pipe_ut 00:06:07.680 LINK string_ut 00:06:07.938 LINK xor_ut 00:06:08.506 LINK ftl_sb_ut 00:06:08.506 LINK ftl_layout_upgrade_ut 00:06:08.506 00:06:08.506 real 2m0.278s 00:06:08.506 user 9m23.279s 00:06:08.506 sys 2m28.891s 00:06:08.506 12:25:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:06:08.506 12:25:50 -- common/autotest_common.sh@10 -- $ set +x 00:06:08.506 ************************************ 00:06:08.506 END TEST unittest_build 00:06:08.506 ************************************ 00:06:08.767 12:25:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:08.767 12:25:51 -- nvmf/common.sh@7 -- # uname -s 00:06:08.767 12:25:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.767 12:25:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.768 12:25:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.768 12:25:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.768 12:25:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.768 12:25:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.768 12:25:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.768 12:25:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.768 12:25:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.768 12:25:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.768 12:25:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:552a9fae-6c3a-47ac-852b-737e800c6922 00:06:08.768 12:25:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=552a9fae-6c3a-47ac-852b-737e800c6922 00:06:08.768 12:25:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.768 12:25:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.768 12:25:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.768 12:25:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.768 12:25:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.768 12:25:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.768 12:25:51 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.768 12:25:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.768 12:25:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.768 12:25:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.768 12:25:51 -- paths/export.sh@5 -- # export PATH 00:06:08.768 12:25:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:08.768 12:25:51 -- nvmf/common.sh@46 -- # : 0 00:06:08.768 12:25:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:08.768 12:25:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:08.768 12:25:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:08.768 12:25:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.768 12:25:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.768 12:25:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:08.768 12:25:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:08.768 12:25:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:08.768 12:25:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:08.768 12:25:51 -- spdk/autotest.sh@32 -- # uname -s 00:06:08.768 12:25:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:08.768 12:25:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:06:08.768 12:25:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:08.768 12:25:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:08.768 12:25:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:08.768 12:25:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:08.768 12:25:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:08.768 12:25:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:06:08.768 12:25:51 -- spdk/autotest.sh@48 -- # udevadm_pid=92450 00:06:08.768 12:25:51 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:06:08.768 12:25:51 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:06:08.768 12:25:51 -- spdk/autotest.sh@54 -- # echo 92466 00:06:08.768 12:25:51 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:06:08.768 12:25:51 -- spdk/autotest.sh@56 -- # echo 92467 00:06:08.768 12:25:51 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:06:08.768 12:25:51 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 
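[editor's note] The xtrace entries above (autotest.sh@33-@40) save the host's crash handler and point the kernel at SPDK's core-collector before any tests run. A minimal standalone sketch of that pattern follows, assuming root privileges; the resolved output path below stands in for the "spdk/../output/coredumps" form the log uses, and the save/restore framing is inferred from old_core_pattern being captured — the restore itself is not shown in this excerpt.

#!/usr/bin/env bash
# Sketch of the core-dump capture pattern traced in autotest.sh@33-@40 above.
# Assumptions: running as root (core_pattern is a privileged sysctl), and the
# literal core_dir below is an assumed resolution of the relative path used
# in the log.

collector=/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh  # path shown in the log
core_dir=/home/vagrant/spdk_repo/output/coredumps                 # assumed resolved path

# Save the incumbent handler so it can be restored afterwards (the trace
# shows it was apport on this host).
old_core_pattern=$(< /proc/sys/kernel/core_pattern)

mkdir -p "$core_dir"

# A leading '|' tells the kernel to pipe every crash into the named program;
# %P = PID, %s = signal number, %t = time of dump (see core(5)).
echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern

# ... run crash-prone tests here ...

# Put the original handler back.
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern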
00:06:08.768 12:25:51 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:08.768 12:25:51 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:06:08.768 12:25:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:08.768 12:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:08.768 12:25:51 -- spdk/autotest.sh@70 -- # create_test_list 00:06:08.768 12:25:51 -- common/autotest_common.sh@736 -- # xtrace_disable 00:06:08.768 12:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:09.027 12:25:51 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:09.027 12:25:51 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:09.028 12:25:51 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:06:09.028 12:25:51 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:09.028 12:25:51 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:06:09.028 12:25:51 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:06:09.028 12:25:51 -- common/autotest_common.sh@1440 -- # uname 00:06:09.028 12:25:51 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:06:09.028 12:25:51 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:06:09.028 12:25:51 -- common/autotest_common.sh@1460 -- # uname 00:06:09.028 12:25:51 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:06:09.028 12:25:51 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:06:09.028 12:25:51 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:06:09.028 12:25:51 -- spdk/autotest.sh@83 -- # hash lcov 00:06:09.028 12:25:51 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:09.028 12:25:51 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:06:09.028 --rc lcov_branch_coverage=1 00:06:09.028 --rc lcov_function_coverage=1 00:06:09.028 --rc genhtml_branch_coverage=1 00:06:09.028 --rc genhtml_function_coverage=1 00:06:09.028 --rc genhtml_legend=1 00:06:09.028 --rc geninfo_all_blocks=1 00:06:09.028 ' 00:06:09.028 12:25:51 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:06:09.028 --rc lcov_branch_coverage=1 00:06:09.028 --rc lcov_function_coverage=1 00:06:09.028 --rc genhtml_branch_coverage=1 00:06:09.028 --rc genhtml_function_coverage=1 00:06:09.028 --rc genhtml_legend=1 00:06:09.028 --rc geninfo_all_blocks=1 00:06:09.028 ' 00:06:09.028 12:25:51 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:06:09.028 --rc lcov_branch_coverage=1 00:06:09.028 --rc lcov_function_coverage=1 00:06:09.028 --rc genhtml_branch_coverage=1 00:06:09.028 --rc genhtml_function_coverage=1 00:06:09.028 --rc genhtml_legend=1 00:06:09.028 --rc geninfo_all_blocks=1 00:06:09.028 --no-external' 00:06:09.028 12:25:51 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:06:09.028 --rc lcov_branch_coverage=1 00:06:09.028 --rc lcov_function_coverage=1 00:06:09.028 --rc genhtml_branch_coverage=1 00:06:09.028 --rc genhtml_function_coverage=1 00:06:09.028 --rc genhtml_legend=1 00:06:09.028 --rc geninfo_all_blocks=1 00:06:09.028 --no-external' 00:06:09.028 12:25:51 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:09.028 lcov: LCOV version 1.15 00:06:09.028 12:25:51 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:27.200 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:27.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:27.200 [the same "no functions found" + WARNING pair follows for lib/ftl/upgrade/ftl_p2l_upgrade.gcno and ftl_chunk_upgrade.gcno] 00:06:59.283 [the pair then repeats for every header-compile object under /home/vagrant/spdk_repo/spdk/test/cpp_headers/ - nvme_spec, uuid, endian, vfio_user_pci, assert, scsi_spec, nvmf_fc_spec, reduce, crc32, json and some eighty further .gcno files through version.gcno; the warnings are expected, since this baseline (-c -i) capture visits objects that exist only to prove the headers compile and so contain no executable functions for gcov to report] 00:06:59.284 12:26:40 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 12:26:40 -- common/autotest_common.sh@712 -- # xtrace_disable 12:26:40 -- common/autotest_common.sh@10 -- # set +x 12:26:40 -- spdk/autotest.sh@102 -- # rm -f 12:26:40 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:59.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:59.284 0000:00:06.0 (1b36 0010): Already using the nvme driver 12:26:41 -- spdk/autotest.sh@107 -- # get_zoned_devs 12:26:41 -- common/autotest_common.sh@1654 -- # zoned_devs=() 12:26:41 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 12:26:41 -- common/autotest_common.sh@1655 -- # local nvme bdf 12:26:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 12:26:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 12:26:41 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 12:26:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 12:26:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 12:26:41 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 12:26:41 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 12:26:41 -- spdk/autotest.sh@121 -- # grep -v p 12:26:41 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 12:26:41 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 12:26:41 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 12:26:41 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 12:26:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:59.284 No valid GPT data, bailing
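The pre-cleanup pass that ends here (autotest.sh@107 through scripts/common.sh@389 above, continuing with blkid and dd below) decides, per NVMe namespace, whether a device may be scrubbed before the tests start. A condensed reconstruction of the traced logic, not the verbatim SPDK helpers:

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        name=${dev##*/}
        # is_block_zoned: a device is zoned when queue/zoned reads anything but "none"
        if [[ -e /sys/block/$name/queue/zoned ]] &&
           [[ $(</sys/block/$name/queue/zoned) != none ]]; then
            continue                                   # leave zoned drives alone
        fi
        # block_in_use: no GPT and no partition-table type => safe to wipe
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1    # clear stale labels/metadata
        fi
    done

On this VM blkid prints nothing for /dev/nvme0n1, so block_in_use returns 1 ("free") and the dd that follows wipes the first MiB in about 6.6 ms.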
00:06:59.285 12:26:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:59.285 12:26:41 -- scripts/common.sh@393 -- # pt= 12:26:41 -- scripts/common.sh@394 -- # return 1 12:26:41 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:59.285 1+0 records in 00:06:59.285 1+0 records out 00:06:59.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00664939 s, 158 MB/s 00:06:59.285 12:26:41 -- spdk/autotest.sh@129 -- # sync 00:06:59.285 12:26:41 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 12:26:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 12:26:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:01.188 12:26:43 -- spdk/autotest.sh@135 -- # uname -s 12:26:43 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 12:26:43 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 12:26:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 12:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 12:26:43 -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 ************************************ 00:07:01.188 START TEST setup.sh 00:07:01.188 ************************************ 12:26:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:07:01.446 * Looking for test storage... 00:07:01.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 12:26:43 -- setup/test-setup.sh@10 -- # uname -s 12:26:43 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 12:26:43 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 12:26:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 12:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 12:26:43 -- common/autotest_common.sh@10 -- # set +x 00:07:01.446 ************************************ 00:07:01.446 START TEST acl 00:07:01.446 ************************************ 12:26:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:07:01.446 * Looking for test storage...
00:07:01.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:01.446 12:26:43 -- setup/acl.sh@10 -- # get_zoned_devs 00:07:01.446 12:26:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:07:01.446 12:26:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:07:01.446 12:26:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:07:01.446 12:26:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:07:01.446 12:26:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:07:01.446 12:26:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:07:01.446 12:26:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:01.446 12:26:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:07:01.446 12:26:43 -- setup/acl.sh@12 -- # devs=() 00:07:01.446 12:26:43 -- setup/acl.sh@12 -- # declare -a devs 00:07:01.446 12:26:43 -- setup/acl.sh@13 -- # drivers=() 00:07:01.446 12:26:43 -- setup/acl.sh@13 -- # declare -A drivers 00:07:01.446 12:26:43 -- setup/acl.sh@51 -- # setup reset 00:07:01.446 12:26:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:01.446 12:26:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:02.386 12:26:44 -- setup/acl.sh@52 -- # collect_setup_devs 00:07:02.386 12:26:44 -- setup/acl.sh@16 -- # local dev driver 00:07:02.386 12:26:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:02.386 12:26:44 -- setup/acl.sh@15 -- # setup output status 00:07:02.386 12:26:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:02.386 12:26:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:02.386 Hugepages 00:07:02.386 node hugesize free / total 00:07:02.386 12:26:44 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:07:02.386 12:26:44 -- setup/acl.sh@19 -- # continue 00:07:02.386 12:26:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:02.386 00:07:02.386 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:02.386 12:26:44 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:07:02.386 12:26:44 -- setup/acl.sh@19 -- # continue 00:07:02.386 12:26:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:02.645 12:26:44 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:07:02.645 12:26:44 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:07:02.645 12:26:44 -- setup/acl.sh@20 -- # continue 00:07:02.645 12:26:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:02.645 12:26:45 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:07:02.645 12:26:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:02.645 12:26:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:07:02.645 12:26:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:02.645 12:26:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:02.645 12:26:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:02.645 12:26:45 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:07:02.645 12:26:45 -- setup/acl.sh@54 -- # run_test denied denied 00:07:02.645 12:26:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:02.645 12:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.645 12:26:45 -- common/autotest_common.sh@10 -- # set +x 00:07:02.645 ************************************ 00:07:02.645 START TEST denied 00:07:02.645 ************************************ 00:07:02.645 12:26:45 -- common/autotest_common.sh@1104 -- # denied 00:07:02.645 12:26:45 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:07:02.645 12:26:45 -- setup/acl.sh@38 -- # setup output config 00:07:02.645 12:26:45 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:07:02.645 12:26:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:02.645 12:26:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:05.180 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:07:05.180 12:26:47 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:07:05.180 12:26:47 -- setup/acl.sh@28 -- # local dev driver 00:07:05.180 12:26:47 -- setup/acl.sh@30 -- # for dev in "$@" 00:07:05.180 12:26:47 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:07:05.180 12:26:47 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:07:05.180 12:26:47 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:05.180 12:26:47 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:05.180 12:26:47 -- setup/acl.sh@41 -- # setup reset 00:07:05.180 12:26:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:05.180 12:26:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:05.747 00:07:05.747 real 0m3.026s 00:07:05.747 user 0m0.580s 00:07:05.747 sys 0m2.529s 00:07:05.747 12:26:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.747 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:05.747 ************************************ 00:07:05.747 END TEST denied 00:07:05.747 ************************************ 00:07:05.747 12:26:48 -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:05.747 12:26:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:05.747 12:26:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.747 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:05.747 ************************************ 00:07:05.747 START TEST allowed 00:07:05.747 ************************************ 00:07:05.747 12:26:48 -- common/autotest_common.sh@1104 -- # allowed 00:07:05.747 12:26:48 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:07:05.747 12:26:48 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:07:05.747 12:26:48 -- setup/acl.sh@45 -- # setup output config 00:07:05.747 12:26:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:05.747 12:26:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:07.654 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:07.654 12:26:49 -- setup/acl.sh@47 -- # verify 00:07:07.654 12:26:49 -- setup/acl.sh@28 -- # local dev driver 00:07:07.654 12:26:49 -- setup/acl.sh@48 -- # setup reset 00:07:07.654 12:26:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:07.654 12:26:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:07.913 00:07:07.913 real 0m2.186s 00:07:07.913 user 0m0.500s 00:07:07.913 sys 0m1.653s 00:07:07.913 ************************************ 00:07:07.913 END TEST allowed 00:07:07.913 ************************************ 00:07:07.913 12:26:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.913 12:26:50 -- common/autotest_common.sh@10 -- # set +x 00:07:08.173 00:07:08.173 real 0m6.676s 00:07:08.173 user 0m1.754s 00:07:08.173 sys 0m5.066s 00:07:08.173 12:26:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.173 12:26:50 -- common/autotest_common.sh@10 -- # set +x 00:07:08.173 ************************************ 00:07:08.173 END TEST acl 
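The two ACL cases just completed reduce to two environment knobs honored by scripts/setup.sh: PCI_BLOCKED makes config skip a controller, PCI_ALLOWED restricts binding to the listed ones. In miniature, with the controller and grep patterns taken from the log (a sketch run from the repo root, not the verbatim test):

    # denied: a blocked controller must be skipped by setup.sh config
    PCI_BLOCKED=' 0000:00:06.0' scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:06.0'
    # allowed: the whitelisted controller must get a driver bound to it
    PCI_ALLOWED='0000:00:06.0' scripts/setup.sh config \
        | grep -E '0000:00:06.0 .*: nvme -> .*'

Each case passes when its grep finds the expected line, which is what the denied (3.0 s) and allowed (2.2 s) timings above account for.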
00:07:08.173 ************************************ 00:07:08.173 12:26:50 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:08.173 12:26:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.173 12:26:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.173 12:26:50 -- common/autotest_common.sh@10 -- # set +x 00:07:08.173 ************************************ 00:07:08.173 START TEST hugepages 00:07:08.173 ************************************ 00:07:08.173 12:26:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:08.173 * Looking for test storage... 00:07:08.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:08.173 12:26:50 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:08.173 12:26:50 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:08.173 12:26:50 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:08.173 12:26:50 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:08.173 12:26:50 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:08.173 12:26:50 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:08.173 12:26:50 -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:08.173 12:26:50 -- setup/common.sh@18 -- # local node= 00:07:08.174 12:26:50 -- setup/common.sh@19 -- # local var val 00:07:08.174 12:26:50 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.174 12:26:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.174 12:26:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:08.174 12:26:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:08.174 12:26:50 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.174 12:26:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.174 12:26:50 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.174 12:26:50 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.174 12:26:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 2970480 kB' 'MemAvailable: 7397196 kB' 'Buffers: 35308 kB' 'Cached: 4529624 kB' 'SwapCached: 0 kB' 'Active: 999720 kB' 'Inactive: 3680468 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 125872 kB' 'Active(file): 998672 kB' 'Inactive(file): 3554596 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 144488 kB' 'Mapped: 68220 kB' 'Shmem: 2600 kB' 'KReclaimable: 194752 kB' 'Slab: 260208 kB' 'SReclaimable: 194752 kB' 'SUnreclaim: 65456 kB' 'KernelStack: 4452 kB' 'PageTables: 3596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 474344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:08.174 12:26:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.174 12:26:50 -- setup/common.sh@32 -- # continue 00:07:08.174 12:26:50 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.174 12:26:50 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.174 12:26:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:07:08.174 12:26:50 -- setup/common.sh@32 -- # continue 00:07:08.174 12:26:50 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.174 12:26:50 -- setup/common.sh@31 -- # read -r var val _ [the same match/continue/read cycle repeats for every remaining /proc/meminfo field, MemAvailable through HugePages_Surp, none of which matches Hugepagesize] 00:07:08.175 12:26:50 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:08.175 12:26:50 -- setup/common.sh@33 -- # echo 2048 00:07:08.175 12:26:50 -- setup/common.sh@33 -- # return 0 00:07:08.175 12:26:50 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:08.175 12:26:50 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:08.175 12:26:50 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:08.175 12:26:50 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:08.175 12:26:50 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:08.175 12:26:50 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:08.175 12:26:50 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:08.175 12:26:50 -- setup/hugepages.sh@207 -- # get_nodes 00:07:08.175 12:26:50 -- setup/hugepages.sh@27 -- # local node 00:07:08.175 12:26:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:08.175 12:26:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:08.175 12:26:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:08.175 12:26:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:08.175 12:26:50 -- setup/hugepages.sh@208 -- # clear_hp 00:07:08.175 12:26:50 -- setup/hugepages.sh@37 -- # local node hp 00:07:08.175 12:26:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:08.175 12:26:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.175 12:26:50 -- setup/hugepages.sh@41 -- # echo 0 00:07:08.175 12:26:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.175 12:26:50 -- setup/hugepages.sh@41 -- # echo 0
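The scan condensed above is setup/common.sh's get_meminfo: walk /proc/meminfo field by field, skipping entries until the requested key matches, then print its value. A simplified standalone sketch (it reads /proc/meminfo directly and omits the per-node branch the trace shows at common.sh@23-29):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the field we want
            echo "$val"                        # numeric value; the "kB" unit lands in _
            return 0
        done </proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize    # -> 2048 on this VM

The 2048 kB result feeds the page-count arithmetic visible just below: default_setup asks for a 2097152 kB pool (size=2097152 in the trace), and 2097152 / 2048 = 1024, the nr_hugepages=1024 the planner assigns to node 0.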
00:07:08.175 12:26:50 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:08.175 12:26:50 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:08.175 12:26:50 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:08.175 12:26:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.175 12:26:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.175 12:26:50 -- common/autotest_common.sh@10 -- # set +x 00:07:08.435 ************************************ 00:07:08.435 START TEST default_setup 00:07:08.435 ************************************ 00:07:08.435 12:26:50 -- common/autotest_common.sh@1104 -- # default_setup 00:07:08.435 12:26:50 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:08.435 12:26:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:08.435 12:26:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:08.435 12:26:50 -- setup/hugepages.sh@51 -- # shift 00:07:08.435 12:26:50 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:08.435 12:26:50 -- setup/hugepages.sh@52 -- # local node_ids 00:07:08.435 12:26:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:08.435 12:26:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:08.435 12:26:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:08.435 12:26:50 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:08.435 12:26:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:08.435 12:26:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:08.435 12:26:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:08.435 12:26:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:08.435 12:26:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:08.435 12:26:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:08.435 12:26:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:08.435 12:26:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:08.435 12:26:50 -- setup/hugepages.sh@73 -- # return 0 00:07:08.435 12:26:50 -- setup/hugepages.sh@137 -- # setup output 00:07:08.435 12:26:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:08.435 12:26:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:08.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:07:08.952 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.890 12:26:52 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:09.890 12:26:52 -- setup/hugepages.sh@89 -- # local node 00:07:09.890 12:26:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:09.890 12:26:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:09.890 12:26:52 -- setup/hugepages.sh@92 -- # local surp 00:07:09.890 12:26:52 -- setup/hugepages.sh@93 -- # local resv 00:07:09.890 12:26:52 -- setup/hugepages.sh@94 -- # local anon 00:07:09.890 12:26:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:09.890 12:26:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:09.890 12:26:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:09.890 12:26:52 -- setup/common.sh@18 -- # local node= 00:07:09.890 12:26:52 -- setup/common.sh@19 -- # local var val 00:07:09.890 12:26:52 -- setup/common.sh@20 -- # local mem_f mem 00:07:09.890 12:26:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.890 12:26:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:09.890 12:26:52 -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:07:09.890 12:26:52 -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.890 12:26:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.890 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.890 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052720 kB' 'MemAvailable: 9479416 kB' 'Buffers: 35308 kB' 'Cached: 4529620 kB' 'SwapCached: 0 kB' 'Active: 999752 kB' 'Inactive: 3696084 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141516 kB' 'Active(file): 998700 kB' 'Inactive(file): 3554568 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 160200 kB' 'Mapped: 68280 kB' 'Shmem: 2596 kB' 'KReclaimable: 194732 kB' 'Slab: 260064 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65332 kB' 'KernelStack: 4384 kB' 'PageTables: 3716 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- 
setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ [the same per-field scan repeats for the AnonHugePages lookup, Active(anon) through VmallocTotal, with no match yet; the excerpt breaks off mid-scan]
# read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.891 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.891 12:26:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:09.891 12:26:52 -- setup/common.sh@33 -- # echo 0 00:07:09.891 12:26:52 -- setup/common.sh@33 -- # return 0 00:07:09.891 12:26:52 -- setup/hugepages.sh@97 -- # anon=0 00:07:09.891 12:26:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:09.891 12:26:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:09.891 12:26:52 -- setup/common.sh@18 -- # local node= 00:07:09.891 12:26:52 -- setup/common.sh@19 -- # local var val 00:07:09.891 12:26:52 -- setup/common.sh@20 -- # local mem_f mem 00:07:09.891 12:26:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.891 12:26:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:09.891 12:26:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:09.891 12:26:52 -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.892 12:26:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052708 kB' 'MemAvailable: 9479404 kB' 'Buffers: 35308 kB' 'Cached: 4529620 kB' 'SwapCached: 0 kB' 'Active: 999756 kB' 'Inactive: 3696416 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141848 kB' 'Active(file): 998700 kB' 'Inactive(file): 3554568 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 160284 kB' 'Mapped: 68204 kB' 'Shmem: 2596 kB' 'KReclaimable: 194732 kB' 'Slab: 260056 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65324 kB' 'KernelStack: 4352 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
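The get_meminfo helper traced above resolves one meminfo counter at a time: it reads the whole file into an array with mapfile, strips the "Node N" prefix that the per-node sysfs variant carries, then splits each entry on IFS=': ' and continues until the requested key matches. A minimal runnable sketch of that pattern, simplified from what the setup/common.sh trace shows (the real helper may differ in details):

  #!/usr/bin/env bash
  # Sketch of the lookup pattern in the trace above; simplified, not verbatim.
  shopt -s extglob   # +([0-9]) below is an extended glob

  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      local mem line var val _

      # With a node argument, read the per-node counters from sysfs instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem <"$mem_f"
      # sysfs entries are prefixed "Node <n> "; /proc/meminfo entries are not.
      mem=("${mem[@]#Node +([0-9]) }")

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          # Keys are scanned in file order; continue until the requested one.
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done
      return 1
  }

Invocations matching this log would be get_meminfo HugePages_Surp for the global counter and get_meminfo HugePages_Surp 0 for node 0 via sysfs.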
]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- 
# IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.892 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.892 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.893 12:26:52 -- setup/common.sh@33 -- # echo 0 00:07:09.893 12:26:52 -- setup/common.sh@33 -- # return 0 00:07:09.893 12:26:52 -- setup/hugepages.sh@99 -- # surp=0 00:07:09.893 12:26:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:09.893 12:26:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:09.893 12:26:52 -- setup/common.sh@18 -- # local node= 00:07:09.893 12:26:52 -- setup/common.sh@19 -- # local var val 00:07:09.893 12:26:52 -- setup/common.sh@20 -- # local mem_f mem 00:07:09.893 12:26:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.893 12:26:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:09.893 12:26:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:09.893 12:26:52 -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.893 12:26:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.893 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.893 12:26:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052708 kB' 'MemAvailable: 9479404 kB' 'Buffers: 35308 kB' 'Cached: 4529620 kB' 'SwapCached: 0 kB' 'Active: 999748 kB' 'Inactive: 3696184 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141616 kB' 'Active(file): 998700 kB' 'Inactive(file): 3554568 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 160028 kB' 'Mapped: 68200 kB' 'Shmem: 2596 
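Each counter above costs one full scan of /proc/meminfo. For comparison, the same hugepage counters can be pulled in a single pass with awk; this one-liner is illustrative only and is not part of the test scripts:

  awk -F'[: ]+' '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1 "=" $2 }' /proc/meminfo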
[... per-key scan for HugePages_Rsvd elided (setup/common.sh@31-32) ...]
00:07:09.894 12:26:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:09.894 12:26:52 -- setup/common.sh@33 -- # echo 0
00:07:09.894 12:26:52 -- setup/common.sh@33 -- # return 0
00:07:09.894 12:26:52 -- setup/hugepages.sh@100 -- # resv=0
00:07:09.894 12:26:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:09.894 nr_hugepages=1024
00:07:09.894 12:26:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:09.894 resv_hugepages=0
00:07:09.894 12:26:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:09.894 surplus_hugepages=0
00:07:09.894 12:26:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:09.894 anon_hugepages=0
00:07:09.894 12:26:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:09.894 12:26:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:09.894 12:26:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
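With anon, surp, and resv collected, hugepages.sh@107-110 asserts that the page count the test requested is consistent with what the kernel reports. Restating that arithmetic with this run's values, reusing the get_meminfo sketch above (variable names mirror the trace; the standalone framing is illustrative):

  nr_hugepages=1024                     # pages the test requested
  surp=$(get_meminfo HugePages_Surp)    # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
  total=$(get_meminfo HugePages_Total)  # 1024 in this run
  # The kernel total must account for the request plus surplus/reserved pages.
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2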
[... get_meminfo prologue elided (setup/common.sh@17-31: mem_f=/proc/meminfo, mapfile -t mem) ...]
00:07:09.894 12:26:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052708 kB' 'MemAvailable: 9479404 kB' 'Buffers: 35308 kB' 'Cached: 4529620 kB' 'SwapCached: 0 kB' 'Active: 999748 kB' 'Inactive: 3695904 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141336 kB' 'Active(file): 998700 kB' 'Inactive(file): 3554568 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 159984 kB' 'Mapped: 68200 kB' 'Shmem: 2596 kB' 'KReclaimable: 194732 kB' 'Slab: 260056 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65324 kB' 'KernelStack: 4356 kB' 'PageTables: 3452 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[... per-key scan for HugePages_Total elided (setup/common.sh@31-32) ...]
00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:09.896 12:26:52 -- setup/common.sh@33 -- # echo 1024
00:07:09.896 12:26:52 -- setup/common.sh@33 -- # return 0
00:07:09.896 12:26:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:09.896 12:26:52 -- setup/hugepages.sh@112 -- # get_nodes
00:07:09.896 12:26:52 -- setup/hugepages.sh@27 -- # local node
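get_nodes discovers the NUMA topology by globbing the kernel's sysfs node directories; on this single-node VM it finds node0 only, so no_nodes=1. A sketch of that enumeration, using the extglob pattern from the trace at setup/hugepages.sh@29-33 (the surrounding scaffolding is illustrative):

  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=1024   # expected pages per node, keyed by node id
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2
  echo "no_nodes=$no_nodes"            # this VM reports a single node0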
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:09.896 12:26:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:09.896 12:26:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:09.896 12:26:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:09.896 12:26:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:09.896 12:26:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:09.896 12:26:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:09.896 12:26:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:09.896 12:26:52 -- setup/common.sh@18 -- # local node=0 00:07:09.896 12:26:52 -- setup/common.sh@19 -- # local var val 00:07:09.896 12:26:52 -- setup/common.sh@20 -- # local mem_f mem 00:07:09.896 12:26:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.896 12:26:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:09.896 12:26:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:09.896 12:26:52 -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.896 12:26:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052708 kB' 'MemUsed: 7190272 kB' 'SwapCached: 0 kB' 'Active: 999748 kB' 'Inactive: 3695796 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141228 kB' 'Active(file): 998700 kB' 'Inactive(file): 3554568 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 4564928 kB' 'Mapped: 68200 kB' 'AnonPages: 160112 kB' 'Shmem: 2596 kB' 'KernelStack: 4308 kB' 'PageTables: 3592 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194732 kB' 'Slab: 260056 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read 
-r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:09.896 12:26:52 -- setup/common.sh@32 -- # continue 00:07:09.896 12:26:52 
-- setup/common.sh@31 -- # IFS=': '
00:07:09.896 12:26:52 -- setup/common.sh@31 -- # read -r var val _
00:07:09.896 [xtrace condensed: setup/common.sh@31-32 repeats for each remaining node0 meminfo key (KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total, HugePages_Free); none matches HugePages_Surp, each is skipped via 'continue']
00:07:09.897 12:26:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:09.897 12:26:52 -- setup/common.sh@33 -- # echo 0
00:07:09.897 12:26:52 -- setup/common.sh@33 -- # return 0
00:07:09.897 12:26:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:09.897 12:26:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:09.897 12:26:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:09.897 12:26:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:09.897 12:26:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:07:09.897 node0=1024 expecting 1024
00:07:09.897 12:26:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:09.897 real 0m1.662s
00:07:09.897 user 0m0.336s
00:07:09.897 sys 0m1.317s
00:07:09.897 12:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:09.897 12:26:52 -- common/autotest_common.sh@10 -- # set +x
00:07:09.897 ************************************
00:07:09.897 END TEST default_setup
00:07:09.897 ************************************
00:07:10.155 12:26:52 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:07:10.155 12:26:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:10.155 12:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:10.155 12:26:52 -- common/autotest_common.sh@10 -- # set +x
00:07:10.155 ************************************
00:07:10.155 START TEST per_node_1G_alloc
00:07:10.155 ************************************
00:07:10.155 12:26:52 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:07:10.155 12:26:52 -- setup/hugepages.sh@143 -- # local IFS=,
00:07:10.155 12:26:52 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:07:10.155 12:26:52 -- setup/hugepages.sh@49 -- # local size=1048576
00:07:10.155 12:26:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:07:10.155 12:26:52 -- setup/hugepages.sh@51 -- # shift
00:07:10.155 12:26:52 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:07:10.156 12:26:52 -- setup/hugepages.sh@52 -- # local node_ids
00:07:10.156 12:26:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:10.156 12:26:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:07:10.156 12:26:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:07:10.156 12:26:52 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:07:10.156 12:26:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:10.156 12:26:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:07:10.156 12:26:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:07:10.156 12:26:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:10.156 12:26:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:10.156 12:26:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:07:10.156 12:26:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:07:10.156 12:26:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:07:10.156 12:26:52 -- setup/hugepages.sh@73 -- # return 0
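For reference, the nr_hugepages=512 above follows from the requested size and the hugepage size this VM reports: 1048576 kB (1 GiB) divided by the 2048 kB Hugepagesize visible in the meminfo snapshots below. A minimal bash sketch of that conversion, assuming get_test_nr_hugepages performs this division (pages_for_size is our own illustrative name, not part of setup/hugepages.sh):

    #!/usr/bin/env bash
    # Sketch: derive a hugepage count from a request size given in kB,
    # using the default hugepage size reported by /proc/meminfo.
    pages_for_size() {
        local size_kb=$1 hp_kb
        hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
        echo $(( size_kb / hp_kb ))
    }
    pages_for_size 1048576   # -> 512 when Hugepagesize is 2048 kB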
00:07:10.156 12:26:52 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:07:10.156 12:26:52 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:07:10.156 12:26:52 -- setup/hugepages.sh@146 -- # setup output
00:07:10.156 12:26:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:10.156 12:26:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:10.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:07:10.415 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:07:10.983 12:26:53 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:07:10.983 12:26:53 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:07:10.983 12:26:53 -- setup/hugepages.sh@89 -- # local node
00:07:10.983 12:26:53 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:10.983 12:26:53 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:10.983 12:26:53 -- setup/hugepages.sh@92 -- # local surp
00:07:10.983 12:26:53 -- setup/hugepages.sh@93 -- # local resv
00:07:10.983 12:26:53 -- setup/hugepages.sh@94 -- # local anon
00:07:10.983 12:26:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:10.983 12:26:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:10.983 12:26:53 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:10.983 12:26:53 -- setup/common.sh@18 -- # local node=
00:07:10.983 12:26:53 -- setup/common.sh@19 -- # local var val
00:07:10.983 12:26:53 -- setup/common.sh@20 -- # local mem_f mem
00:07:10.983 12:26:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:10.983 12:26:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:10.983 12:26:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:10.983 12:26:53 -- setup/common.sh@28 -- # mapfile -t mem
00:07:10.983 12:26:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:10.983 12:26:53 -- setup/common.sh@31 -- # IFS=': '
00:07:10.983 12:26:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6097852 kB' 'MemAvailable: 10524548 kB' 'Buffers: 35308 kB' 'Cached: 4529624 kB' 'SwapCached: 0 kB' 'Active: 999784 kB' 'Inactive: 3696176 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141636 kB' 'Active(file): 998728 kB' 'Inactive(file): 3554540 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 160304 kB' 'Mapped: 68224 kB' 'Shmem: 2596 kB' 'KReclaimable: 194732 kB' 'Slab: 260024 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65292 kB' 'KernelStack: 4312 kB' 'PageTables: 3424 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:07:10.983 [xtrace condensed: setup/common.sh@31-32 compares each key from MemTotal through HardwareCorrupted against AnonHugePages; each is skipped via 'continue']
00:07:10.984 12:26:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:10.984 12:26:53 -- setup/common.sh@33 -- # echo 0
00:07:10.984 12:26:53 -- setup/common.sh@33 -- # return 0
00:07:10.984 12:26:53 -- setup/hugepages.sh@97 -- # anon=0
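The loop traced above is setup/common.sh's get_meminfo pattern: split each "Key: value kB" line on IFS=': ', compare the key against the requested field, and echo the value on the first match. A condensed, self-contained re-creation of that pattern (a sketch, not the verbatim SPDK function):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo loop seen in the trace: IFS=': ' puts the
    # field name in var, the number in val, and the "kB" unit in _.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch AnonHugePages   # prints 0 on this host, matching anon=0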
00:07:10.984 12:26:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:10.984 [xtrace condensed: setup/common.sh@17-31 repeats the same /proc/meminfo setup as above, with get=HugePages_Surp]
00:07:10.984 12:26:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6098104 kB' 'MemAvailable: 10524800 kB' 'Buffers: 35308 kB' 'Cached: 4529624 kB' 'SwapCached: 0 kB' 'Active: 999776 kB' 'Inactive: 3695992 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141452 kB' 'Active(file): 998728 kB' 'Inactive(file): 3554540 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 160116 kB' 'Mapped: 68200 kB' 'Shmem: 2596 kB' 'KReclaimable: 194732 kB' 'Slab: 260096 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65364 kB' 'KernelStack: 4336 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:07:10.984 [xtrace condensed: each key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped via 'continue']
00:07:10.985 12:26:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:10.985 12:26:53 -- setup/common.sh@33 -- # echo 0
00:07:10.985 12:26:53 -- setup/common.sh@33 -- # return 0
00:07:10.985 12:26:53 -- setup/hugepages.sh@99 -- # surp=0
00:07:10.985 12:26:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:10.985 [xtrace condensed: setup/common.sh@17-31 repeats the same setup, with get=HugePages_Rsvd]
00:07:10.985 12:26:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6098104 kB' 'MemAvailable: 10524804 kB' 'Buffers: 35308 kB' 'Cached: 4529624 kB' 'SwapCached: 0 kB' 'Active: 999776 kB' 'Inactive: 3696256 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141712 kB' 'Active(file): 998728 kB' 'Inactive(file): 3554544 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 160352 kB' 'Mapped: 68200 kB' 'Shmem: 2596 kB' 'KReclaimable: 194732 kB' 'Slab: 260096 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65364 kB' 'KernelStack: 4336 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:07:10.985 [xtrace condensed: each key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped via 'continue']
00:07:10.985 12:26:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:10.985 12:26:53 -- setup/common.sh@33 -- # echo 0
00:07:10.985 12:26:53 -- setup/common.sh@33 -- # return 0
00:07:10.985 12:26:53 -- setup/hugepages.sh@100 -- # resv=0
00:07:10.985 nr_hugepages=512
00:07:10.985 12:26:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:07:10.985 12:26:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:10.985 resv_hugepages=0
00:07:10.985 surplus_hugepages=0
00:07:10.985 12:26:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:10.985 anon_hugepages=0
00:07:10.985 12:26:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:11.245 12:26:53 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:07:11.245 12:26:53 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
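The @107/@109 checks above, together with the @110 check that follows, appear to assert one accounting identity: the HugePages_Total the kernel reports must equal the requested count once surplus and reserved pages (both 0 in this run) are folded in. A standalone sketch of that check (read_kv is our own helper name, not part of the SPDK scripts):

    #!/usr/bin/env bash
    # Sketch: the hugepage accounting identity behind hugepages.sh@110.
    read_kv() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }
    requested=512
    surp=$(read_kv HugePages_Surp)     # 0 in this run
    resv=$(read_kv HugePages_Rsvd)     # 0 in this run
    total=$(read_kv HugePages_Total)   # 512 in this run
    (( total == requested + surp + resv )) && echo "hugepage accounting OK"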
kB' 'SUnreclaim: 65364 kB' 'KernelStack: 4288 kB' 'PageTables: 3456 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # IFS=': ' 00:07:11.246 12:26:53 -- setup/common.sh@31 -- # read -r var val _ 00:07:11.246 12:26:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:11.246 12:26:53 -- setup/common.sh@32 -- # continue
00:07:11.246 [... xtrace condensed: get_meminfo scans the remaining /proc/meminfo keys (Inactive(file) through FilePmdMapped); none match HugePages_Total, so each iteration continues ...]
00:07:11.247 12:26:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:11.247 12:26:53 -- setup/common.sh@33 -- # echo 512
00:07:11.247 12:26:53 -- setup/common.sh@33 -- # return 0
00:07:11.247 12:26:53 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:07:11.247 12:26:53 -- setup/hugepages.sh@112 -- # get_nodes
00:07:11.247 12:26:53 -- setup/hugepages.sh@27 -- # local node
00:07:11.247 12:26:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:11.247 12:26:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:07:11.247 12:26:53 -- setup/hugepages.sh@32 -- # no_nodes=1
00:07:11.247 12:26:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:11.247 12:26:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:11.247 12:26:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
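The lookup above is setup/common.sh's get_meminfo helper doing a plain key/value scan over a meminfo file. A minimal sketch of what the traced statements amount to, reconstructed from the xtrace alone (a hypothetical reimplementation, not copied from setup/common.sh; the real script may differ in detail):

#!/usr/bin/env bash
# Sketch of get_meminfo as suggested by the xtrace above (an assumption
# built from the traced statements, not the actual SPDK source).
shopt -s extglob  # the +([0-9]) pattern below needs extended globs

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan key/value pairs until the requested key matches; this loop is
    # the source of the long [[ key == pattern ]] / continue runs in the
    # log, and the printf '%s\n' snapshot lines are its input.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

get_meminfo HugePages_Total    # system-wide, as echoed (512) above
get_meminfo HugePages_Surp 0   # node 0 only, as invoked next

This is why every /proc/meminfo key appears in the trace: each one is tested in order until the requested key matches and its value is echoed.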
00:07:11.247 12:26:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:11.247 12:26:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:11.247 12:26:53 -- setup/common.sh@18 -- # local node=0
00:07:11.247 12:26:53 -- setup/common.sh@19 -- # local var val
00:07:11.247 12:26:53 -- setup/common.sh@20 -- # local mem_f mem
00:07:11.247 12:26:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.247 12:26:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:11.247 12:26:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:11.247 12:26:53 -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.247 12:26:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.247 12:26:53 -- setup/common.sh@31 -- # IFS=': '
00:07:11.247 12:26:53 -- setup/common.sh@31 -- # read -r var val _
00:07:11.247 12:26:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6097604 kB' 'MemUsed: 6145376 kB' 'SwapCached: 0 kB' 'Active: 999776 kB' 'Inactive: 3695912 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141368 kB' 'Active(file): 998728 kB' 'Inactive(file): 3554544 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 4564932 kB' 'Mapped: 68200 kB' 'AnonPages: 159956 kB' 'Shmem: 2596 kB' 'KernelStack: 4340 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194732 kB' 'Slab: 260096 kB' 'SReclaimable: 194732 kB' 'SUnreclaim: 65364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:07:11.247 [... xtrace condensed: scan of the node0 meminfo keys (MemTotal through HugePages_Free); none match HugePages_Surp ...]
00:07:11.248 12:26:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:11.248 12:26:53 -- setup/common.sh@33 -- # echo 0
00:07:11.248 12:26:53 -- setup/common.sh@33 -- # return 0
00:07:11.248 12:26:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:11.248 12:26:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:11.248 12:26:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:11.248 12:26:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:11.248 12:26:53 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:07:11.248 node0=512 expecting 512
00:07:11.248 12:26:53 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:07:11.248
00:07:11.248 real 0m1.134s
00:07:11.248 user 0m0.380s
00:07:11.248 sys 0m0.798s
00:07:11.248 12:26:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:11.248 12:26:53 -- common/autotest_common.sh@10 -- # set +x
00:07:11.248 ************************************
00:07:11.248 END TEST per_node_1G_alloc
00:07:11.248 ************************************
00:07:11.248 12:26:53 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:07:11.248 12:26:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:11.248 12:26:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:11.248 12:26:53 -- common/autotest_common.sh@10 -- # set +x
00:07:11.248 ************************************
00:07:11.248 START TEST even_2G_alloc
00:07:11.248 ************************************
00:07:11.248 12:26:53 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:07:11.248 12:26:53 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:07:11.248 12:26:53 -- setup/hugepages.sh@49 -- # local size=2097152
00:07:11.248 12:26:53 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:07:11.248 12:26:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:11.248 12:26:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:07:11.248 12:26:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:07:11.248 12:26:53 -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:11.248 12:26:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:11.248 12:26:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:11.248 12:26:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:07:11.248 12:26:53 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:11.248 12:26:53 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:11.248 12:26:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:11.248 12:26:53 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:07:11.248 12:26:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:11.248 12:26:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:07:11.248 12:26:53 -- setup/hugepages.sh@83 -- # : 0
00:07:11.248 12:26:53 -- setup/hugepages.sh@84 -- # : 0
00:07:11.248 12:26:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:11.248 12:26:53 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:07:11.248 12:26:53 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:07:11.248 12:26:53 -- setup/hugepages.sh@153 -- # setup output
00:07:11.248 12:26:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:11.248 12:26:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:11.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:07:12.125 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
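The nr_hugepages=1024 traced above is plain arithmetic: even_2G_alloc asks get_test_nr_hugepages for size=2097152 while this host reports 'Hugepagesize: 2048 kB', and 2097152 / 2048 = 1024. A sketch of that step, assuming both quantities are in kB (which the numbers in this run support; the guard mirrors the hugepages.sh@55 line in the trace):

# Size -> page-count conversion as suggested by the get_test_nr_hugepages
# trace (values taken from this run; kB units are an assumption).
size=2097152             # requested allocation: 2 GiB expressed in kB
default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
(( size >= default_hugepages )) || exit 1    # guard seen at hugepages.sh@55
nr_hugepages=$(( size / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"            # -> nr_hugepages=1024

With HUGE_EVEN_ALLOC=yes, setup.sh is then expected to spread those 1024 pages evenly across the available NUMA nodes; this VM has a single node, so all 1024 land on node0.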
00:07:12.125 12:26:54 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:07:12.125 12:26:54 -- setup/hugepages.sh@89 -- # local node
00:07:12.125 12:26:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:12.125 12:26:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:12.125 12:26:54 -- setup/hugepages.sh@92 -- # local surp
00:07:12.125 12:26:54 -- setup/hugepages.sh@93 -- # local resv
00:07:12.125 12:26:54 -- setup/hugepages.sh@94 -- # local anon
00:07:12.125 12:26:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:12.125 12:26:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:12.125 12:26:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:12.125 12:26:54 -- setup/common.sh@18 -- # local node=
00:07:12.125 12:26:54 -- setup/common.sh@19 -- # local var val
00:07:12.125 12:26:54 -- setup/common.sh@20 -- # local mem_f mem
00:07:12.125 12:26:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:12.125 12:26:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:12.125 12:26:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:12.125 12:26:54 -- setup/common.sh@28 -- # mapfile -t mem
00:07:12.125 12:26:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:12.125 12:26:54 -- setup/common.sh@31 -- # IFS=': '
00:07:12.125 12:26:54 -- setup/common.sh@31 -- # read -r var val _
00:07:12.125 12:26:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049172 kB' 'MemAvailable: 9475860 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999796 kB' 'Inactive: 3696288 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141748 kB' 'Active(file): 998740 kB' 'Inactive(file): 3554540 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 160464 kB' 'Mapped: 68092 kB' 'Shmem: 2604 kB' 'KReclaimable: 194712 kB' 'Slab: 259840 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65128 kB' 'KernelStack: 4300 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:07:12.125 [... xtrace condensed: scan of /proc/meminfo keys (MemTotal through HardwareCorrupted); none match AnonHugePages ...]
00:07:12.126 12:26:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:12.126 12:26:54 -- setup/common.sh@33 -- # echo 0
00:07:12.126 12:26:54 -- setup/common.sh@33 -- # return 0
00:07:12.126 12:26:54 -- setup/hugepages.sh@97 -- # anon=0
00:07:12.126 12:26:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:12.126 12:26:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:12.126 12:26:54 -- setup/common.sh@18 -- # local node=
00:07:12.126 12:26:54 -- setup/common.sh@19 -- # local var val
00:07:12.126 12:26:54 -- setup/common.sh@20 -- # local mem_f mem
00:07:12.126 12:26:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:12.126 12:26:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:12.126 12:26:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:12.126 12:26:54 -- setup/common.sh@28 -- # mapfile -t mem
00:07:12.126 12:26:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:12.127 12:26:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049692 kB' 'MemAvailable: 9476380 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999796 kB' 'Inactive: 3696188 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141648 kB' 'Active(file): 998740 kB' 'Inactive(file): 3554540 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 160368 kB' 'Mapped: 68092 kB' 'Shmem: 2604 kB' 'KReclaimable: 194712 kB' 'Slab: 259840 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65128 kB' 'KernelStack: 4316 kB' 'PageTables: 3708 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:07:12.127 12:26:54 -- setup/common.sh@31 -- # IFS=': '
00:07:12.127 12:26:54 -- setup/common.sh@31 -- # read -r var val _
00:07:12.127 [... xtrace condensed: scan of /proc/meminfo keys (MemTotal through HugePages_Free); none match HugePages_Surp until the HugePages_Surp entry itself ...]
00:07:12.389 12:26:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:12.389 12:26:54 -- setup/common.sh@33 -- # echo 0
00:07:12.389 12:26:54 -- setup/common.sh@33 -- # return 0
00:07:12.389 12:26:54 -- setup/hugepages.sh@99 -- # surp=0
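At this point verify_nr_hugepages has collected anon=0 and surp=0 and is about to read HugePages_Rsvd. Pulled together, the accounting this pass walks through looks roughly like the following (a hedged sketch: get_meminfo as sketched earlier, variable names taken from the xtrace; the [[ always [madvise] never != ... ]] guard is assumed to test /sys/kernel/mm/transparent_hugepage/enabled, and the exact control flow in setup/hugepages.sh may differ):

# Hedged sketch of the verify_nr_hugepages accounting traced above.
anon=0
# Sample AnonHugePages only when transparent hugepages are not disabled;
# the bracketed word in the sysfs file is the active policy.
if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)    # 0 in this run
fi
surp=$(get_meminfo HugePages_Surp)       # surplus pages, 0 here
resv=$(get_meminfo HugePages_Rsvd)       # reserved pages, read next in the log
total=$(get_meminfo HugePages_Total)     # kernel pool size, 1024 here

# The pool must cover exactly the requested pages plus the corrections,
# matching the later (( 1024 == nr_hugepages + surp + resv )) check.
(( total == nr_hugepages + surp + resv ))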
00:07:12.389 12:26:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:12.389 12:26:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:12.389 12:26:54 -- setup/common.sh@18 -- # local node=
00:07:12.389 12:26:54 -- setup/common.sh@19 -- # local var val
00:07:12.389 12:26:54 -- setup/common.sh@20 -- # local mem_f mem
00:07:12.389 12:26:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:12.389 12:26:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:12.389 12:26:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:12.389 12:26:54 -- setup/common.sh@28 -- # mapfile -t mem
00:07:12.389 12:26:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:12.389 12:26:54 -- setup/common.sh@31 -- # IFS=': '
00:07:12.389 12:26:54 -- setup/common.sh@31 -- # read -r var val _
00:07:12.389 12:26:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049692 kB' 'MemAvailable: 9476380 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999788 kB' 'Inactive: 3696232 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141692 kB' 'Active(file): 998740 kB' 'Inactive(file): 3554540 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 160424 kB' 'Mapped: 68084 kB' 'Shmem: 2604 kB' 'KReclaimable: 194712 kB' 'Slab: 259840 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65128 kB' 'KernelStack: 4332 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:07:12.389 [... xtrace condensed: scan of /proc/meminfo keys (MemTotal through HugePages_Free); none match HugePages_Rsvd ...]
00:07:12.390 12:26:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:12.390 12:26:54 -- setup/common.sh@33 -- # echo 0
00:07:12.390 12:26:54 -- setup/common.sh@33 -- # return 0
00:07:12.390 12:26:54 -- setup/hugepages.sh@100 -- # resv=0
00:07:12.390 12:26:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:12.390 nr_hugepages=1024
00:07:12.390 12:26:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:12.390 resv_hugepages=0
00:07:12.390 12:26:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:12.390 surplus_hugepages=0
00:07:12.390 12:26:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:12.390 anon_hugepages=0
00:07:12.390 12:26:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:12.390 12:26:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:12.390 12:26:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:12.390 12:26:54 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:12.390 12:26:54 -- setup/common.sh@18 -- # local node=
00:07:12.390 12:26:54 -- setup/common.sh@19 -- # local var val
00:07:12.390 12:26:54 -- setup/common.sh@20 -- # local mem_f mem
00:07:12.390 12:26:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:12.390 12:26:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:12.390 12:26:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:12.390 12:26:54 -- setup/common.sh@28 -- # mapfile -t mem
00:07:12.390 12:26:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:12.390 12:26:54 -- setup/common.sh@31 -- # IFS=': '
00:07:12.390 12:26:54 -- setup/common.sh@31 -- # read -r var val _
00:07:12.390 12:26:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049692 kB' 'MemAvailable: 9476380 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999788 kB' 'Inactive: 3695972 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141432 kB' 'Active(file): 998740 kB' 'Inactive(file): 3554540 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 160164 kB' 'Mapped: 68084 kB' 'Shmem: 2604 kB' 'KReclaimable: 194712 kB' 'Slab: 259840 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65128 kB' 'KernelStack: 4400 kB' 'PageTables: 3996 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:07:12.390 [xtrace condensed: get_meminfo scans MemTotal through FilePmdMapped, continuing until HugePages_Total matches]
00:07:12.392 12:26:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:12.392 12:26:54 -- setup/common.sh@33 -- # echo 1024
00:07:12.392 12:26:54 -- setup/common.sh@33 -- # return 0
00:07:12.392 12:26:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:12.392 12:26:54 -- setup/hugepages.sh@112 -- # get_nodes
00:07:12.392 12:26:54 -- setup/hugepages.sh@27 -- # local node
00:07:12.392 12:26:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:12.392 12:26:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:12.392 12:26:54 -- setup/hugepages.sh@32 -- # no_nodes=1
00:07:12.392 12:26:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:12.392 12:26:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:12.392 12:26:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:12.392 12:26:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:12.392 12:26:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:12.392 12:26:54 -- setup/common.sh@18 -- # local node=0
00:07:12.392 12:26:54 -- setup/common.sh@19 -- # local var val
00:07:12.392 12:26:54 -- setup/common.sh@20 -- # local mem_f mem
00:07:12.392 12:26:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:12.392 12:26:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:12.392 12:26:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:12.392 12:26:54 -- setup/common.sh@28 -- # mapfile -t mem
00:07:12.392 12:26:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:12.392 12:26:54 -- setup/common.sh@31 -- # IFS=': '
00:07:12.392 12:26:54 -- setup/common.sh@31 -- # read -r var val _
00:07:12.392 12:26:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5049692 kB' 'MemUsed: 7193288 kB' 'SwapCached: 0 kB' 'Active: 999788 kB' 'Inactive: 3696248 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141708 kB' 'Active(file): 998740 kB' 'Inactive(file): 3554540 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'FilePages: 4564948 kB' 'Mapped: 68084 kB' 'AnonPages: 160388 kB' 'Shmem: 2604 kB' 'KernelStack: 4452 kB' 'PageTables: 3956 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194712 kB' 'Slab: 259840 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
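A detail worth calling out before the node-0 scan below: with a node argument (get_meminfo HugePages_Surp 0 above), mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines all carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion seen in the trace strips that prefix via extglob so the per-node lines parse like the global file. A short stand-alone sketch of the same stripping, assuming a single-node box like this VM:

# Read node-local counters; the extglob substitution removes the "Node N "
# prefix so the remaining text parses exactly like /proc/meminfo. (Sketch only.)
shopt -s extglob
node=0
mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep '^HugePages_'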
00:07:12.392 [xtrace condensed: get_meminfo scans the node0 keys MemTotal through HugePages_Free, continuing until HugePages_Surp matches]
00:07:12.393 12:26:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:12.393 12:26:54 -- setup/common.sh@33 -- # echo 0
00:07:12.393 12:26:54 -- setup/common.sh@33 -- # return 0
00:07:12.393 12:26:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:12.393 12:26:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:12.393 12:26:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:12.393 12:26:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:12.393 12:26:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:07:12.393 node0=1024 expecting 1024
00:07:12.393 12:26:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:12.393 
00:07:12.393 real 0m1.102s
00:07:12.393 user 0m0.342s
00:07:12.393 sys 0m0.813s
00:07:12.393 12:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:12.393 12:26:54 -- common/autotest_common.sh@10 -- # set +x
00:07:12.393 ************************************
00:07:12.393 END TEST even_2G_alloc
00:07:12.393 ************************************
00:07:12.393 12:26:54 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:07:12.393 12:26:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:12.393 12:26:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:12.393 12:26:54 -- common/autotest_common.sh@10 -- # set +x
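even_2G_alloc passed because the numbers above are internally consistent: the pool really holds 1024 pages, node 0 accounts for all of them, and surplus and reserved are both zero. A quick stand-alone re-check of that accounting (hedged sketch; the paths are standard kernel interfaces, and total == persistent + surplus is the usual hugetlb identity the script's checks rely on):

# Pull the four HugePages_* counters in one pass and compare against the
# persistent-page target in /proc/sys/vm/nr_hugepages.
nr=$(cat /proc/sys/vm/nr_hugepages)
read -r total free rsvd surp < <(awk '
    /^HugePages_(Total|Free|Rsvd|Surp):/ { v[++n] = $2 }
    END { print v[1], v[2], v[3], v[4] }' /proc/meminfo)
echo "total=$total free=$free rsvd=$rsvd surp=$surp nr=$nr"
(( total == nr + surp )) || echo 'hugepage accounting mismatch' >&2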
00:07:12.393 ************************************
00:07:12.393 START TEST odd_alloc
00:07:12.393 ************************************
00:07:12.393 12:26:54 -- common/autotest_common.sh@1104 -- # odd_alloc
00:07:12.393 12:26:54 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:07:12.393 12:26:54 -- setup/hugepages.sh@49 -- # local size=2098176
00:07:12.393 12:26:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:07:12.393 12:26:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:12.393 12:26:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:07:12.393 12:26:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:07:12.393 12:26:54 -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:12.393 12:26:54 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:12.393 12:26:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:07:12.393 12:26:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:07:12.393 12:26:54 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:12.393 12:26:54 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:12.393 12:26:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:12.393 12:26:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:07:12.393 12:26:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:12.393 12:26:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:07:12.393 12:26:54 -- setup/hugepages.sh@83 -- # : 0
00:07:12.393 12:26:54 -- setup/hugepages.sh@84 -- # : 0
00:07:12.393 12:26:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:12.393 12:26:54 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:07:12.393 12:26:54 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:07:12.393 12:26:54 -- setup/hugepages.sh@160 -- # setup output
00:07:12.393 12:26:54 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:12.393 12:26:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:12.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:07:12.959 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:07:13.218 12:26:55 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:07:13.218 12:26:55 -- setup/hugepages.sh@89 -- # local node
00:07:13.218 12:26:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:13.218 12:26:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:13.218 12:26:55 -- setup/hugepages.sh@92 -- # local surp
00:07:13.218 12:26:55 -- setup/hugepages.sh@93 -- # local resv
00:07:13.218 12:26:55 -- setup/hugepages.sh@94 -- # local anon
00:07:13.218 12:26:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:13.218 12:26:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:13.218 12:26:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:13.218 12:26:55 -- setup/common.sh@18 -- # local node=
00:07:13.218 12:26:55 -- setup/common.sh@19 -- # local var val
00:07:13.218 12:26:55 -- setup/common.sh@20 -- # local mem_f mem
00:07:13.218 12:26:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:13.218 12:26:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:13.218 12:26:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:13.218 12:26:55 -- setup/common.sh@28 -- # mapfile -t mem
00:07:13.218 12:26:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': '
00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _
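Before the verification pass below, note the arithmetic odd_alloc just performed: HUGEMEM=2049 MiB becomes size=2098176 kB, and with the 2048 kB default hugepage that yields the deliberately odd target nr_hugepages=1025. A hedged sketch of that conversion (the round-up is my reading of the 2098176 -> 1025 step in the trace, not a quote of the script):

# HUGEMEM is given in MiB; the hugepage size comes from /proc/meminfo.
hugemem_mb=2049
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
size_kb=$(( hugemem_mb * 1024 ))                                     # 2098176
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"                                    # 1025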
printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5053124 kB' 'MemAvailable: 9479816 kB' 'Buffers: 35316 kB' 'Cached: 4529628 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692968 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 138440 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 16 kB' 'AnonPages: 157100 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259872 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65160 kB' 'KernelStack: 4300 kB' 'PageTables: 3432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 481472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 
-- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 
12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.218 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.218 12:26:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.218 12:26:55 -- setup/common.sh@33 -- # echo 0 00:07:13.218 12:26:55 -- setup/common.sh@33 -- # return 0 00:07:13.479 12:26:55 -- setup/hugepages.sh@97 -- # anon=0 00:07:13.479 12:26:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:13.479 12:26:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.479 12:26:55 -- setup/common.sh@18 -- # local node= 00:07:13.479 12:26:55 -- setup/common.sh@19 -- # local var val 00:07:13.479 12:26:55 -- setup/common.sh@20 -- # local mem_f mem 00:07:13.479 12:26:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.479 12:26:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.479 12:26:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.479 12:26:55 -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.479 12:26:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5053124 kB' 'MemAvailable: 9479816 kB' 'Buffers: 35316 kB' 'Cached: 4529628 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692968 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 138440 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 16 kB' 'AnonPages: 157100 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259872 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65160 kB' 'KernelStack: 4300 kB' 'PageTables: 3432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 481472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 
-- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.479 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.479 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 
12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.480 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.480 12:26:55 -- setup/common.sh@33 -- # echo 0 00:07:13.480 12:26:55 -- setup/common.sh@33 -- # return 0 00:07:13.480 12:26:55 -- setup/hugepages.sh@99 -- # surp=0 00:07:13.480 12:26:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:13.480 12:26:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:13.480 12:26:55 -- setup/common.sh@18 -- # local node= 00:07:13.480 12:26:55 -- setup/common.sh@19 -- # local var val 00:07:13.480 12:26:55 -- setup/common.sh@20 -- # local mem_f mem 00:07:13.480 12:26:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.480 12:26:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.480 12:26:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.480 12:26:55 -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.480 12:26:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.480 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5053124 kB' 'MemAvailable: 9479816 kB' 'Buffers: 35316 kB' 'Cached: 4529628 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692936 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 138408 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 16 kB' 'AnonPages: 157036 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259872 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65160 kB' 'KernelStack: 4284 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 481472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 
'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
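The long runs of IFS=': ' / read / continue above are xtrace output from the get_meminfo helper in setup/common.sh: it loads a meminfo file into an array, walks it line by line, and echoes the value of the first key matching the requested name, so callers can capture it with command substitution. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (details may differ from the real helper):

    # Sketch of the get_meminfo pattern seen in this trace (setup/common.sh@17-33).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node index, prefer the per-node sysfs file; with node empty,
        # the node/meminfo path does not exist and /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix, if any
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # every miss logs one 'continue'
            echo "$val"                        # e.g. 'echo 0' for HugePages_Surp
            return 0
        done
    }

Each lookup therefore ends in the trace with an echo of the value followed by return 0, and the caller stores it, e.g. surp=$(get_meminfo HugePages_Surp).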
00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.481 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.481 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.482 12:26:55 -- setup/common.sh@33 -- # echo 0 00:07:13.482 12:26:55 -- setup/common.sh@33 -- # return 0 00:07:13.482 12:26:55 -- setup/hugepages.sh@100 -- # resv=0 00:07:13.482 12:26:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:07:13.482 nr_hugepages=1025 00:07:13.482 resv_hugepages=0 00:07:13.482 12:26:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:13.482 12:26:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:13.482 surplus_hugepages=0 00:07:13.482 anon_hugepages=0 00:07:13.482 12:26:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:13.482 12:26:55 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:13.482 12:26:55 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:07:13.482 12:26:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:13.482 12:26:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:13.482 12:26:55 -- setup/common.sh@18 -- # local node= 00:07:13.482 12:26:55 -- setup/common.sh@19 -- # local var val 00:07:13.482 12:26:55 -- setup/common.sh@20 -- # local mem_f mem 00:07:13.482 12:26:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.482 12:26:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.482 12:26:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.482 
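What this block is verifying: the odd_alloc test asked the kernel for an odd page count (nr_hugepages=1025), and verify_nr_hugepages cross-checks the pool bookkeeping, i.e. HugePages_Total must equal the requested pages plus surplus plus reserved. A condensed sketch of the arithmetic being traced; that the literal 1025 at hugepages.sh@107 is a previously fetched HugePages_Total is my inference, and get_meminfo is assumed as sketched earlier:

    # Sketch of the accounting checks traced around hugepages.sh@99-110.
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    nr_hugepages=1025                      # what odd_alloc requested
    total=$(get_meminfo HugePages_Total)   # 1025
    (( total == nr_hugepages + surp + resv )) || exit 1   # pool adds up
    (( total == nr_hugepages ))            # no surplus/reserved pages in use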
12:26:55 -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.482 12:26:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5053124 kB' 'MemAvailable: 9479816 kB' 'Buffers: 35316 kB' 'Cached: 4529628 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692936 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 138408 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 16 kB' 'AnonPages: 157036 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259872 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65160 kB' 'KernelStack: 4352 kB' 'PageTables: 3652 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 481472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 
-- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 
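The backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are not in the script itself; they are how bash's xtrace re-quotes an unquoted pattern operand of [[ == ]] so the logged line round-trips as a literal match. Reproducible in any bash shell (a stock shell prints the default '+ ' prefix; this CI sets a PS4 that logs timestamps and script@line instead):

    $ set -x
    $ var=MemTotal
    $ [[ $var == HugePages_Total ]]
    + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]

So every iteration of the meminfo loop logs one of these comparisons plus a continue until the wanted key is reached.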
00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.482 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.482 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 
00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.483 12:26:55 -- setup/common.sh@33 -- # echo 1025 00:07:13.483 12:26:55 -- setup/common.sh@33 -- # return 0 00:07:13.483 12:26:55 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:13.483 12:26:55 -- setup/hugepages.sh@112 -- # get_nodes 00:07:13.483 12:26:55 -- setup/hugepages.sh@27 -- # local node 00:07:13.483 12:26:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:13.483 12:26:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:07:13.483 12:26:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:13.483 12:26:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:13.483 12:26:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:13.483 
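With the totals reconciled, get_nodes (hugepages.sh@27-33) discovers the NUMA topology by globbing sysfs and keys an array by node index; this VM has a single node, so no_nodes=1 and the whole 1025-page pool is expected on node0. A sketch of that discovery step, reconstructed from the trace (the literal 1025 assignment mirrors the trace, which reuses the already-verified total):

    # Sketch of get_nodes as traced (hugepages.sh@27-33).
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path, leaving just the index (0 here)
        nodes_sys[${node##*node}]=1025
    done
    no_nodes=${#nodes_sys[@]}        # 1 on this single-node VM
    (( no_nodes > 0 )) || exit 1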
12:26:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:13.483 12:26:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:13.483 12:26:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.483 12:26:55 -- setup/common.sh@18 -- # local node=0 00:07:13.483 12:26:55 -- setup/common.sh@19 -- # local var val 00:07:13.483 12:26:55 -- setup/common.sh@20 -- # local mem_f mem 00:07:13.483 12:26:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.483 12:26:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:13.483 12:26:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:13.483 12:26:55 -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.483 12:26:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.483 12:26:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5052620 kB' 'MemUsed: 7190360 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3693196 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 138668 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 96 kB' 'Writeback: 16 kB' 'FilePages: 4564944 kB' 'Mapped: 67224 kB' 'AnonPages: 157296 kB' 'Shmem: 2596 kB' 'KernelStack: 4420 kB' 'PageTables: 3652 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194712 kB' 'Slab: 259872 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
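Here node=0, so get_meminfo switched mem_f to /sys/devices/system/node/node0/meminfo. That file prefixes every line with "Node 0" (which the extglob strip removes) and carries a per-node field set: MemUsed and FilePages appear, while system-wide keys such as MemAvailable, CommitLimit, and the DirectMap totals do not. Illustrative first lines, reconstructed from the values in the dump above:

    $ head -n 3 /sys/devices/system/node/node0/meminfo
    Node 0 MemTotal:       12242980 kB
    Node 0 MemFree:         5052620 kB
    Node 0 MemUsed:         7190360 kB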
00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.483 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.483 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 
00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 
00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # continue 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.484 12:26:55 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.484 12:26:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.484 12:26:55 -- setup/common.sh@33 -- # echo 0 00:07:13.484 12:26:55 -- setup/common.sh@33 -- # return 0 00:07:13.484 12:26:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:13.484 12:26:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:13.484 12:26:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:13.484 12:26:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:13.484 12:26:55 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:07:13.484 node0=1025 expecting 1025 00:07:13.484 12:26:55 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:07:13.484 00:07:13.484 real 0m1.072s 00:07:13.484 user 0m0.345s 00:07:13.484 sys 0m0.765s 00:07:13.484 12:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.484 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.484 ************************************ 00:07:13.484 END TEST odd_alloc 00:07:13.484 ************************************ 00:07:13.484 12:26:55 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:13.484 12:26:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.484 12:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.484 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.484 ************************************ 00:07:13.484 START TEST custom_alloc 00:07:13.484 ************************************ 00:07:13.484 12:26:55 -- common/autotest_common.sh@1104 -- # custom_alloc 00:07:13.484 12:26:55 -- setup/hugepages.sh@167 -- # local IFS=, 00:07:13.484 12:26:55 -- setup/hugepages.sh@169 -- # local node 00:07:13.484 12:26:55 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:13.484 12:26:55 -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:13.484 12:26:55 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:13.484 12:26:55 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:13.484 12:26:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:07:13.484 12:26:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:13.484 12:26:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:13.484 12:26:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:13.484 12:26:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:13.484 12:26:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:13.484 12:26:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:13.484 12:26:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:13.484 12:26:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:13.484 12:26:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:13.484 12:26:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 
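odd_alloc closes cleanly: node0 reports the full odd-sized pool ("node0=1025 expecting 1025"), the comparison passes, and the END TEST banner is printed after roughly 1.07s (the sorted_t[...]=1 assignments just before use the counts themselves as array subscripts, a cheap way to collect distinct values for that comparison). The next test, custom_alloc, starts by translating a requested pool size into a page count: get_test_nr_hugepages receives 1048576 (kB, i.e. 1 GiB) and arrives at nr_hugepages=512, consistent with dividing by the 2048 kB Hugepagesize reported earlier. A sketch of that sizing step; the explicit division is my inference, since the trace shows only the size check and the result:

    # Sketch of get_test_nr_hugepages as traced (hugepages.sh@49-57).
    size=1048576                                    # requested pool, in kB (1 GiB)
    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB on this VM
    (( size >= default_hugepages )) || exit 1       # must fit at least one page
    nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512

The per-node split that follows (hugepages.sh@62-84) then assigns all 512 pages to node0, since _no_nodes=1.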
00:07:13.484 12:26:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:13.484 12:26:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:13.484 12:26:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:13.484 12:26:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:13.484 12:26:55 -- setup/hugepages.sh@83 -- # : 0 00:07:13.484 12:26:55 -- setup/hugepages.sh@84 -- # : 0 00:07:13.484 12:26:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:13.484 12:26:55 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:13.485 12:26:55 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:07:13.485 12:26:55 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:13.485 12:26:55 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:13.485 12:26:55 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:13.485 12:26:55 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:13.485 12:26:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:13.485 12:26:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:13.485 12:26:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:13.485 12:26:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:13.485 12:26:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:13.485 12:26:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:13.485 12:26:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:13.485 12:26:55 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:13.485 12:26:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:13.485 12:26:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:13.485 12:26:55 -- setup/hugepages.sh@78 -- # return 0 00:07:13.485 12:26:55 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:07:13.485 12:26:55 -- setup/hugepages.sh@187 -- # setup output 00:07:13.485 12:26:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:13.485 12:26:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:14.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:07:14.052 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.313 12:26:56 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:07:14.313 12:26:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:14.313 12:26:56 -- setup/hugepages.sh@89 -- # local node 00:07:14.313 12:26:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:14.313 12:26:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:14.313 12:26:56 -- setup/hugepages.sh@92 -- # local surp 00:07:14.313 12:26:56 -- setup/hugepages.sh@93 -- # local resv 00:07:14.313 12:26:56 -- setup/hugepages.sh@94 -- # local anon 00:07:14.313 12:26:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:14.313 12:26:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:14.313 12:26:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:14.313 12:26:56 -- setup/common.sh@18 -- # local node= 00:07:14.313 12:26:56 -- setup/common.sh@19 -- # local var val 00:07:14.313 12:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:07:14.313 12:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.313 12:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.313 12:26:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.313 12:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.313 12:26:56 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:07:14.313 12:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6106312 kB' 'MemAvailable: 10533008 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999808 kB' 'Inactive: 3692800 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 138268 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554532 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 157112 kB' 'Mapped: 67256 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259736 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65024 kB' 'KernelStack: 4272 kB' 'PageTables: 3316 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.313 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.313 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 
12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.314 12:26:56 -- setup/common.sh@33 -- # echo 0 00:07:14.314 12:26:56 -- setup/common.sh@33 -- # return 0 00:07:14.314 12:26:56 -- setup/hugepages.sh@97 -- # anon=0 00:07:14.314 12:26:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:14.314 12:26:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.314 12:26:56 -- setup/common.sh@18 -- # local node= 00:07:14.314 12:26:56 -- setup/common.sh@19 -- # local var val 00:07:14.314 12:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:07:14.314 12:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.314 12:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.314 12:26:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.314 12:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.314 12:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6106564 kB' 'MemAvailable: 10533260 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999800 kB' 'Inactive: 3692892 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138360 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554532 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 157180 kB' 'Mapped: 67240 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259664 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 64952 kB' 'KernelStack: 4288 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.314 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.314 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # 
continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.315 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.315 12:26:56 -- setup/common.sh@33 -- # echo 0 00:07:14.315 12:26:56 -- setup/common.sh@33 -- # return 0 00:07:14.315 12:26:56 -- setup/hugepages.sh@99 -- # surp=0 00:07:14.315 12:26:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:14.315 12:26:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:14.315 12:26:56 -- setup/common.sh@18 -- # local node= 00:07:14.315 12:26:56 -- setup/common.sh@19 -- # local var val 00:07:14.315 12:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:07:14.315 12:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.315 12:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.315 12:26:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.315 12:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.315 12:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.315 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6106564 kB' 'MemAvailable: 10533260 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999800 kB' 'Inactive: 3692856 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138324 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554532 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 156920 kB' 'Mapped: 67240 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259664 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 64952 kB' 'KernelStack: 4288 kB' 'PageTables: 3348 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- 
setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.316 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 
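The long run of IFS/read/continue entries around this point is a single loop inside the get_meminfo helper of setup/common.sh: it scans a cached copy of /proc/meminfo key by key until it reaches the requested field (HugePages_Rsvd in this pass), prints that field's value, and returns. A minimal reconstruction from this trace follows; the real helper in setup/common.sh may differ in detail:

shopt -s extglob                 # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo mem
    # with a node argument, read that node's own meminfo instead (common.sh@23-24)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # strip any "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # var=key, val=number, _ swallows "kB"
        [[ $var == "$get" ]] || continue   # not the key yet; keep scanning
        echo "$val"                        # e.g. 0 for HugePages_Rsvd here
        return 0
    done
    return 1
}

Each [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pair in the log is one trip through that for loop, which is why the trace repeats once per /proc/meminfo line. The scan continues below.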
00:07:14.316 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.316 12:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.317 12:26:56 -- setup/common.sh@33 -- # echo 0 00:07:14.317 12:26:56 -- setup/common.sh@33 -- # return 0 00:07:14.317 12:26:56 -- setup/hugepages.sh@100 -- # resv=0 00:07:14.317 12:26:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:14.317 nr_hugepages=512 00:07:14.317 resv_hugepages=0 00:07:14.317 12:26:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:14.317 surplus_hugepages=0 00:07:14.317 12:26:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:14.317 anon_hugepages=0 00:07:14.317 12:26:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:14.317 12:26:56 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:14.317 12:26:56 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:14.317 12:26:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:14.317 12:26:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:14.317 12:26:56 -- setup/common.sh@18 -- # local node= 00:07:14.317 12:26:56 -- setup/common.sh@19 -- # local var val 00:07:14.317 12:26:56 -- setup/common.sh@20 -- # 
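With resv just read back as 0, and surp and anon both 0 from the earlier passes, the hugepages.sh@107 check above reduces to plain accounting: every page in the kernel's 512-page pool must be one the test configured (the literal 512 on the left is the already-expanded pool size). Spelled out with the values as they appear in the trace:

# values gathered above via get_meminfo
anon=0 surp=0 resv=0
nr_hugepages=512
(( 512 == nr_hugepages + surp + resv )) && echo ok   # 512 == 512 + 0 + 0

The HugePages_Total scan that starts here re-reads the pool size from /proc/meminfo to confirm the same 512 from the kernel's side.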
local mem_f mem 00:07:14.317 12:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.317 12:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.317 12:26:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.317 12:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.317 12:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.317 12:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6106564 kB' 'MemAvailable: 10533260 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999800 kB' 'Inactive: 3692852 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138320 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554532 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 156916 kB' 'Mapped: 67240 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259664 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 64952 kB' 'KernelStack: 4272 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # 
continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.317 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.317 12:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.318 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.318 12:26:56 -- setup/common.sh@33 -- # echo 512 00:07:14.318 12:26:56 -- setup/common.sh@33 -- # return 0 00:07:14.318 12:26:56 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:14.318 12:26:56 -- setup/hugepages.sh@112 -- # get_nodes 00:07:14.318 12:26:56 -- setup/hugepages.sh@27 -- # local node 00:07:14.318 12:26:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:14.318 12:26:56 -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=512 00:07:14.318 12:26:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:14.318 12:26:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:14.318 12:26:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:14.318 12:26:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:14.318 12:26:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:14.318 12:26:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.318 12:26:56 -- setup/common.sh@18 -- # local node=0 00:07:14.318 12:26:56 -- setup/common.sh@19 -- # local var val 00:07:14.318 12:26:56 -- setup/common.sh@20 -- # local mem_f mem 00:07:14.318 12:26:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.318 12:26:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:14.318 12:26:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:14.318 12:26:56 -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.318 12:26:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.318 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6106564 kB' 'MemUsed: 6136416 kB' 'SwapCached: 0 kB' 'Active: 999800 kB' 'Inactive: 3692968 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138436 kB' 'Active(file): 998756 kB' 'Inactive(file): 3554532 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'FilePages: 4564948 kB' 'Mapped: 67240 kB' 'AnonPages: 157288 kB' 'Shmem: 2596 kB' 'KernelStack: 4324 kB' 'PageTables: 3268 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194712 kB' 'Slab: 259664 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 64952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 
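Here the helper runs once per NUMA node: called as get_meminfo HugePages_Surp 0, common.sh@23-24 swap mem_f from /proc/meminfo to the node-local file before the scan. Node files prefix every line with the node number, which the @29 expansion removes so the same parser works. Roughly, under the same assumptions as the sketch above:

node=0
mem_f=/sys/devices/system/node/node$node/meminfo   # instead of /proc/meminfo
shopt -s extglob
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"

This VM has a single node (no_nodes=1 above), so the per-node pass runs exactly once.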
-- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.319 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.319 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # continue 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # IFS=': ' 00:07:14.591 12:26:56 -- setup/common.sh@31 -- # read -r var val _ 00:07:14.591 12:26:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.591 12:26:56 -- setup/common.sh@33 -- # echo 0 00:07:14.591 12:26:56 -- setup/common.sh@33 -- # return 0 00:07:14.591 12:26:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:14.591 12:26:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:14.591 12:26:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:14.591 12:26:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:14.591 12:26:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:14.591 node0=512 expecting 512 00:07:14.591 12:26:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:14.591 00:07:14.591 real 0m0.897s 00:07:14.591 user 0m0.393s 00:07:14.591 sys 0m0.553s 00:07:14.591 12:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.591 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:07:14.591 ************************************ 00:07:14.591 END TEST custom_alloc 00:07:14.591 ************************************ 00:07:14.591 12:26:56 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:14.591 12:26:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.591 12:26:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.591 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:07:14.591 ************************************ 00:07:14.591 START TEST no_shrink_alloc 00:07:14.591 ************************************ 00:07:14.591 12:26:56 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:07:14.591 12:26:56 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:14.591 12:26:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:14.591 12:26:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:14.591 12:26:56 -- setup/hugepages.sh@51 -- # shift 00:07:14.591 12:26:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:14.591 12:26:56 -- setup/hugepages.sh@52 -- # local node_ids 00:07:14.591 12:26:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:14.591 12:26:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:14.591 12:26:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:14.591 12:26:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:14.591 12:26:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:14.591 12:26:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:14.591 12:26:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:14.591 12:26:56 -- 
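custom_alloc passes above (node0=512 expecting 512, 0.897s wall time) and no_shrink_alloc begins by sizing its pool: get_test_nr_hugepages converts the requested size into a page count. Assuming the size argument is in kB, as the surrounding numbers suggest (2097152 kB = 2 GiB, and the meminfo dumps report Hugepagesize: 2048 kB):

size=2097152             # kB, from: get_test_nr_hugepages 2097152 0
default_hugepages=2048   # kB per 2 MiB huge page
echo $(( size / default_hugepages ))   # 1024, matching nr_hugepages=1024 above

The trailing 0 argument pins the whole 1024-page pool to node 0 (node_ids=('0')).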
setup/hugepages.sh@67 -- # nodes_test=() 00:07:14.591 12:26:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:14.591 12:26:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:14.591 12:26:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:14.591 12:26:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:14.591 12:26:56 -- setup/hugepages.sh@73 -- # return 0 00:07:14.591 12:26:56 -- setup/hugepages.sh@198 -- # setup output 00:07:14.591 12:26:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:14.591 12:26:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:14.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:07:14.850 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:15.420 12:26:57 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:15.420 12:26:57 -- setup/hugepages.sh@89 -- # local node 00:07:15.420 12:26:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:15.420 12:26:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:15.420 12:26:57 -- setup/hugepages.sh@92 -- # local surp 00:07:15.420 12:26:57 -- setup/hugepages.sh@93 -- # local resv 00:07:15.420 12:26:57 -- setup/hugepages.sh@94 -- # local anon 00:07:15.420 12:26:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:15.420 12:26:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:15.420 12:26:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:15.420 12:26:57 -- setup/common.sh@18 -- # local node= 00:07:15.420 12:26:57 -- setup/common.sh@19 -- # local var val 00:07:15.420 12:26:57 -- setup/common.sh@20 -- # local mem_f mem 00:07:15.420 12:26:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.420 12:26:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.420 12:26:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.420 12:26:57 -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.420 12:26:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.420 12:26:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5056316 kB' 'MemAvailable: 9483012 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999812 kB' 'Inactive: 3692996 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 138468 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 157356 kB' 'Mapped: 67260 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259960 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65248 kB' 'KernelStack: 4288 kB' 'PageTables: 3352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:15.420 12:26:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.420 12:26:57 -- 
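The hugepages.sh@96 test that opens verify_nr_hugepages above reads the transparent-hugepage mode: in /sys/kernel/mm/transparent_hugepage/enabled the bracketed word is the active setting, here [madvise] rather than [never], so THP can still create anonymous huge pages and AnonHugePages has to be sampled, which is what the scan below does. Checking the mode by hand looks like:

# the active THP mode is the bracketed entry, e.g. "always [madvise] never"
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    echo "THP active: $thp"   # AnonHugePages may then be non-zero
fi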
setup/common.sh@32 -- # continue 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.420 12:26:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.420 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.420 12:26:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.420 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.420 12:26:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.420 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.420 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # 
[[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 
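The long run of `continue` records above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until the requested key (here AnonHugePages) matches, then echoing its value. A minimal standalone sketch of that scan pattern — get_meminfo_sketch is a hypothetical name, not the real helper, and the actual script buffers the file with mapfile and differs in detail:

```bash
#!/usr/bin/env bash
# Hypothetical re-creation of the scan traced above (not the real
# setup/common.sh source): read each "Key: value" pair and print the
# value once the requested key matches; print 0 if it never does.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # With a node argument, read that NUMA node's meminfo instead; its
    # lines carry a "Node N " prefix, stripped before parsing.
    if [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the wall of "continue" above
        echo "$val"
        return 0
    done < <(sed "s/^Node $node //" "$mem_f")
    echo 0
}
```

Called as `get_meminfo_sketch AnonHugePages`, it would print 0 for this run, which is why the stage sets anon=0 just below.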
00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.421 12:26:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.421 12:26:57 -- setup/common.sh@33 -- # echo 0 00:07:15.421 12:26:57 -- setup/common.sh@33 -- # return 0 00:07:15.421 12:26:57 -- setup/hugepages.sh@97 -- # anon=0 00:07:15.421 12:26:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:15.421 12:26:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:15.421 12:26:57 -- setup/common.sh@18 -- # 
local node= 00:07:15.421 12:26:57 -- setup/common.sh@19 -- # local var val 00:07:15.421 12:26:57 -- setup/common.sh@20 -- # local mem_f mem 00:07:15.421 12:26:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.421 12:26:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.421 12:26:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.421 12:26:57 -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.421 12:26:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.421 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5056316 kB' 'MemAvailable: 9483012 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692988 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138460 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 157072 kB' 'Mapped: 67240 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259976 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65264 kB' 'KernelStack: 4320 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 
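A note on why every key in these records appears once plain and once as `\H\u\g\e\P\a\g\e\s\_\S\u\r\p`: bash's xtrace re-prints the expanded, quoted right-hand side of `[[ == ]]` with each character backslash-escaped, so the word reads as a literal match rather than a glob pattern. A two-line repro, assuming nothing beyond stock bash:

```bash
set -x
get=HugePages_Surp
var=MemTotal
[[ $var == "$get" ]] || echo "no match"
# trace shows: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
```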
00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- 
setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.422 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.422 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r 
var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.423 12:26:57 -- setup/common.sh@33 -- # echo 0 00:07:15.423 12:26:57 -- setup/common.sh@33 -- # return 0 00:07:15.423 12:26:57 -- setup/hugepages.sh@99 -- # surp=0 00:07:15.423 12:26:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:15.423 12:26:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:15.423 12:26:57 -- setup/common.sh@18 -- # local node= 00:07:15.423 12:26:57 -- setup/common.sh@19 -- # local var val 00:07:15.423 12:26:57 -- setup/common.sh@20 -- # local mem_f mem 00:07:15.423 12:26:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.423 12:26:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.423 12:26:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.423 12:26:57 -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.423 12:26:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5056316 kB' 'MemAvailable: 9483012 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692624 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138096 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 156988 kB' 'Mapped: 67240 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259976 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65264 kB' 'KernelStack: 4304 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
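With anon and surp both read back as 0, the stage moves on to HugePages_Rsvd; once all four values are in hand it checks the accounting identity seen a little further down in this trace, `(( 1024 == nr_hugepages + surp + resv ))`. A hedged sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch from above:

```bash
# Illustrative accounting only; variable names mirror the trace, not
# the exact hugepages.sh source. The total the kernel reports must
# equal the requested count plus surplus and reserved pages.
nr_hugepages=1024
anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in this run
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0, scanned just below
total=$(get_meminfo_sketch HugePages_Total)  # 1024
(( total == nr_hugepages + surp + resv )) &&
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
```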
00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 
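After the global check passes, the test repeats the same scan against /sys/devices/system/node/node0/meminfo to confirm where the pages actually landed; on this single-node VM all 1024 pages should sit on node 0, which is the `node0=1024` echo this trace ends on. A sketch of that per-node pass, again with illustrative names:

```bash
# Enumerate NUMA node directories and report each node's hugepage
# total, scanned the same way as the global pass above.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    pages=$(get_meminfo_sketch HugePages_Total "$node")
    echo "node${node}=${pages}"            # expecting "node0=1024" here
done
```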
00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.423 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.423 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 
-- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.424 12:26:57 -- setup/common.sh@33 -- # echo 0 00:07:15.424 12:26:57 -- setup/common.sh@33 -- # return 0 00:07:15.424 nr_hugepages=1024 00:07:15.424 resv_hugepages=0 00:07:15.424 surplus_hugepages=0 00:07:15.424 anon_hugepages=0 00:07:15.424 12:26:57 -- setup/hugepages.sh@100 -- # resv=0 00:07:15.424 12:26:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:15.424 12:26:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:15.424 12:26:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:15.424 12:26:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:15.424 12:26:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:15.424 12:26:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:15.424 12:26:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:15.424 12:26:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:15.424 12:26:57 -- setup/common.sh@18 -- # local node= 00:07:15.424 12:26:57 -- setup/common.sh@19 -- # local var val 00:07:15.424 12:26:57 -- setup/common.sh@20 -- # local mem_f mem 00:07:15.424 12:26:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.424 12:26:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.424 12:26:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.424 12:26:57 -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.424 12:26:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.424 12:26:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5055816 kB' 'MemAvailable: 9482512 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692716 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138188 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 157072 kB' 'Mapped: 67240 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259976 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65264 kB' 'KernelStack: 4304 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 
4046848 kB' 'DirectMap1G: 10485760 kB' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.424 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.424 12:26:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val 
_ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 
-- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.425 12:26:57 -- setup/common.sh@32 -- # continue 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # IFS=': ' 00:07:15.425 12:26:57 -- setup/common.sh@31 -- # read -r var val _ 00:07:15.425 12:26:57 -- setup/common.sh@32 
[xtrace condensed: setup/common.sh@31-@32 compared the remaining meminfo keys (AnonHugePages ... FilePmdMapped) against HugePages_Total; every non-match logged IFS=': ', read -r var val _, continue]
00:07:15.426 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:15.426 12:26:57 -- setup/common.sh@33 -- # echo 1024
00:07:15.426 12:26:57 -- setup/common.sh@33 -- # return 0
00:07:15.426 12:26:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:15.426 12:26:57 -- setup/hugepages.sh@112 -- # get_nodes
00:07:15.426 12:26:57 -- setup/hugepages.sh@27 -- # local node
00:07:15.426 12:26:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:15.426 12:26:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:15.426 12:26:57 -- setup/hugepages.sh@32 -- # no_nodes=1
00:07:15.426 12:26:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:15.426 12:26:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:15.426 12:26:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:15.426 12:26:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:15.426 12:26:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:15.426 12:26:57 -- setup/common.sh@18 -- # local node=0
00:07:15.426 12:26:57 -- setup/common.sh@19 -- # local var val
00:07:15.426 12:26:57 -- setup/common.sh@20 -- # local mem_f mem
00:07:15.426 12:26:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:15.426 12:26:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:15.426 12:26:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:15.426 12:26:57 -- setup/common.sh@28 -- # mapfile -t mem
00:07:15.426 12:26:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:15.426 12:26:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5056068 kB' 'MemUsed: 7186912 kB' 'SwapCached: 0 kB' 'Active: 999804 kB' 'Inactive: 3692900 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 138372 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'FilePages: 4564948 kB' 'Mapped: 67240 kB' 'AnonPages: 156996 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194712 kB' 'Slab: 259976 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: node0 meminfo keys MemTotal ... HugePages_Free compared against HugePages_Surp, all continue]
00:07:15.683 12:26:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:15.683 12:26:57 -- setup/common.sh@33 -- # echo 0
00:07:15.683 12:26:57 -- setup/common.sh@33 -- # return 0
00:07:15.683 12:26:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:15.683 12:26:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:15.683 12:26:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:15.683 12:26:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:15.683 12:26:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:07:15.683 node0=1024 expecting 1024
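Every get_meminfo call in this log is the same scan: pick the meminfo source, strip any per-node prefix, then walk key/value pairs until the requested key matches. A minimal sketch of that helper, reconstructed from the setup/common.sh xtrace records above (the real SPDK helper may differ in detail):

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node counters live in sysfs; use them when a node was given
    # (with an empty node the path does not exist and /proc/meminfo is kept).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan key/value pairs; the "continue" records in the trace are the
    # non-matching keys. Print the value of the first match.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

So `get_meminfo HugePages_Surp 0` above reads node0's surplus count from /sys/devices/system/node/node0/meminfo and echoes 0.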
00:07:15.683 12:26:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:15.683 12:26:57 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:07:15.683 12:26:57 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:07:15.683 12:26:57 -- setup/hugepages.sh@202 -- # setup output
00:07:15.683 12:26:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:15.683 12:26:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:15.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:07:15.943 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:07:15.943 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:07:15.943 12:26:58 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:07:15.943 12:26:58 -- setup/hugepages.sh@89 -- # local node
00:07:15.943 12:26:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:15.943 12:26:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:15.943 12:26:58 -- setup/hugepages.sh@92 -- # local surp
00:07:15.943 12:26:58 -- setup/hugepages.sh@93 -- # local resv
00:07:15.943 12:26:58 -- setup/hugepages.sh@94 -- # local anon
00:07:15.943 12:26:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:15.943 12:26:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:15.943 12:26:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:15.943 12:26:58 -- setup/common.sh@18 -- # local node=
00:07:15.943 12:26:58 -- setup/common.sh@19 -- # local var val
00:07:15.943 12:26:58 -- setup/common.sh@20 -- # local mem_f mem
00:07:15.943 12:26:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:15.943 12:26:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:15.943 12:26:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:15.943 12:26:58 -- setup/common.sh@28 -- # mapfile -t mem
00:07:15.943 12:26:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:15.943 12:26:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5057696 kB' 'MemAvailable: 9484392 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999808 kB' 'Inactive: 3694048 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139520 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 158248 kB' 'Mapped: 67296 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259800 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65088 kB' 'KernelStack: 4464 kB' 'PageTables: 4000 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: /proc/meminfo keys MemTotal ... HardwareCorrupted compared against AnonHugePages, all continue]
00:07:15.944 12:26:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:15.944 12:26:58 -- setup/common.sh@33 -- # echo 0
00:07:15.944 12:26:58 -- setup/common.sh@33 -- # return 0
00:07:15.944 12:26:58 -- setup/hugepages.sh@97 -- # anon=0
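verify_nr_hugepages gathers its inputs before checking anything: anonymous THP pages are only counted when transparent hugepages are not globally disabled (the @96 test above, where "[never]" as the bracketed policy would mean THP is off). A condensed sketch of that opening, assuming the get_meminfo sketch earlier; the function name here is illustrative, not the SPDK one:

collect_hugepage_counters() {
    local surp resv anon=0

    # hugepages.sh@96: only read AnonHugePages when the active THP policy
    # in sysfs is not [never].
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    echo "surp=$surp resv=$resv anon=$anon"
}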
00:07:15.944 12:26:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:15.944 12:26:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:15.944 12:26:58 -- setup/common.sh@18 -- # local node=
00:07:15.944 12:26:58 -- setup/common.sh@19 -- # local var val
00:07:15.944 12:26:58 -- setup/common.sh@20 -- # local mem_f mem
00:07:15.944 12:26:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:15.944 12:26:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:15.944 12:26:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:15.944 12:26:58 -- setup/common.sh@28 -- # mapfile -t mem
00:07:15.944 12:26:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:15.944 12:26:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5058976 kB' 'MemAvailable: 9485672 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999808 kB' 'Inactive: 3693636 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139108 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 157512 kB' 'Mapped: 67296 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259800 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65088 kB' 'KernelStack: 4388 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: keys MemTotal ... HugePages_Rsvd compared against HugePages_Surp, all continue]
00:07:15.946 12:26:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:15.946 12:26:58 -- setup/common.sh@33 -- # echo 0
00:07:15.946 12:26:58 -- setup/common.sh@33 -- # return 0
00:07:15.946 12:26:58 -- setup/hugepages.sh@99 -- # surp=0
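Without a node argument the helper reads /proc/meminfo, so this lookup and the two that follow are system-wide. Illustrative only, using the sketch from earlier; the values are the ones this run echoes:

# Fetch the three hugepage counters verify_nr_hugepages needs in one loop.
for key in HugePages_Surp HugePages_Rsvd HugePages_Total; do
    printf '%s=%s\n' "$key" "$(get_meminfo "$key")"
done
# HugePages_Surp=0
# HugePages_Rsvd=0
# HugePages_Total=1024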
00:07:15.946 12:26:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:15.946 12:26:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:15.946 12:26:58 -- setup/common.sh@18 -- # local node=
00:07:15.946 12:26:58 -- setup/common.sh@19 -- # local var val
00:07:15.946 12:26:58 -- setup/common.sh@20 -- # local mem_f mem
00:07:15.946 12:26:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:15.946 12:26:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:15.946 12:26:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:15.946 12:26:58 -- setup/common.sh@28 -- # mapfile -t mem
00:07:15.946 12:26:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:15.946 12:26:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5058656 kB' 'MemAvailable: 9485352 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999800 kB' 'Inactive: 3693340 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 138812 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 157384 kB' 'Mapped: 67320 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259736 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65024 kB' 'KernelStack: 4400 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: keys MemTotal ... HugePages_Free compared against HugePages_Rsvd, all continue]
00:07:16.206 12:26:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:16.206 12:26:58 -- setup/common.sh@33 -- # echo 0
00:07:16.206 12:26:58 -- setup/common.sh@33 -- # return 0
00:07:16.206 nr_hugepages=1024
00:07:16.206 resv_hugepages=0
00:07:16.206 surplus_hugepages=0
00:07:16.206 anon_hugepages=0
00:07:16.206 12:26:58 -- setup/hugepages.sh@100 -- # resv=0
00:07:16.206 12:26:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:16.206 12:26:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:16.206 12:26:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:16.206 12:26:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:16.206 12:26:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:16.206 12:26:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
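The two guards above, plus the @110 re-check that follows, encode the invariant under test: the kernel's hugepage pool must equal the requested count plus surplus and reserved pages, and with nothing surplus or reserved it must equal the requested count exactly. Condensed, with variable names as in the trace; the "|| exit 1" failure path is an assumption, since the trace only shows the successful arithmetic tests:

nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) || exit 1                               # @107
(( 1024 == nr_hugepages )) || exit 1                                             # @109
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1     # @110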
00:07:16.206 12:26:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:16.206 12:26:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:16.206 12:26:58 -- setup/common.sh@18 -- # local node=
00:07:16.206 12:26:58 -- setup/common.sh@19 -- # local var val
00:07:16.206 12:26:58 -- setup/common.sh@20 -- # local mem_f mem
00:07:16.206 12:26:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:16.206 12:26:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:16.206 12:26:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:16.206 12:26:58 -- setup/common.sh@28 -- # mapfile -t mem
00:07:16.206 12:26:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:16.206 12:26:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5058656 kB' 'MemAvailable: 9485352 kB' 'Buffers: 35316 kB' 'Cached: 4529632 kB' 'SwapCached: 0 kB' 'Active: 999800 kB' 'Inactive: 3693188 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 138660 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 157228 kB' 'Mapped: 67320 kB' 'Shmem: 2596 kB' 'KReclaimable: 194712 kB' 'Slab: 259736 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65024 kB' 'KernelStack: 4368 kB' 'PageTables: 3468 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 481604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: keys MemTotal ... FilePmdMapped compared against HugePages_Total, all continue]
00:07:16.207 12:26:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:16.207 12:26:58 -- setup/common.sh@33 -- # echo 1024
00:07:16.207 12:26:58 -- setup/common.sh@33 -- # return 0
00:07:16.207 12:26:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:16.207 12:26:58 -- setup/hugepages.sh@112 -- # get_nodes
00:07:16.207 12:26:58 -- setup/hugepages.sh@27 -- # local node
00:07:16.207 12:26:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:16.207 12:26:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:16.207 12:26:58 -- setup/hugepages.sh@32 -- # no_nodes=1
00:07:16.207 12:26:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:16.207 12:26:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:16.207 12:26:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:16.207 12:26:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:16.207 12:26:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:16.207 12:26:58 -- setup/common.sh@18 -- # local node=0
00:07:16.207 12:26:58 -- setup/common.sh@19 -- # local var val
00:07:16.207 12:26:58 -- setup/common.sh@20 -- # local mem_f mem
00:07:16.207 12:26:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:16.207 12:26:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:16.207 12:26:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:16.208 12:26:58 -- setup/common.sh@28 -- # mapfile -t mem
00:07:16.208 12:26:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:16.208 12:26:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5058656 kB' 'MemUsed: 7184324 kB' 'SwapCached: 0 kB' 'Active: 999800 kB' 'Inactive: 3693292 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 138764 kB' 'Active(file): 998760 kB' 'Inactive(file): 3554528 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'FilePages: 4564948 kB' 'Mapped: 67320 kB' 'AnonPages: 157292 kB' 'Shmem: 2596 kB' 'KernelStack: 4404 kB' 'PageTables: 3412 kB'
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194712 kB' 'Slab: 259736 kB' 'SReclaimable: 194712 kB' 'SUnreclaim: 65024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 
00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 
-- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # continue 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 12:26:58 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 12:26:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.208 12:26:58 -- setup/common.sh@33 -- # echo 0 00:07:16.208 12:26:58 -- setup/common.sh@33 -- # return 0 00:07:16.208 12:26:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:16.208 12:26:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:16.208 12:26:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:16.208 12:26:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:16.208 12:26:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
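The field-by-field scan above is the trace of a meminfo lookup: every non-matching field takes the continue branch until the requested key is found and its value echoed. A minimal sketch of that helper, reconstructed from the setup/common.sh@17-@33 records in the trace (the function name, the mapfile, and the extglob strip of the "Node <n> " prefix follow those records; this is not the verbatim source):

    # Return one meminfo field, preferring the per-NUMA-node view when a
    # node number is given (e.g. get_meminfo HugePages_Surp 0, as above).
    get_meminfo() {
        local get=$1 node=$2 var val
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it so
        # the field names match the /proc/meminfo spelling.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }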
00:07:16.208 node0=1024 expecting 1024 00:07:16.208 12:26:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:16.209 00:07:16.209 real 0m1.673s 00:07:16.209 user 0m0.695s 00:07:16.209 sys 0m0.890s 00:07:16.209 12:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.209 12:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:16.209 ************************************ 00:07:16.209 END TEST no_shrink_alloc 00:07:16.209 ************************************ 00:07:16.209 12:26:58 -- setup/hugepages.sh@217 -- # clear_hp 00:07:16.209 12:26:58 -- setup/hugepages.sh@37 -- # local node hp 00:07:16.209 12:26:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:16.209 12:26:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:16.209 12:26:58 -- setup/hugepages.sh@41 -- # echo 0 00:07:16.209 12:26:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:16.209 12:26:58 -- setup/hugepages.sh@41 -- # echo 0 00:07:16.209 12:26:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:16.209 12:26:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:16.209 ************************************ 00:07:16.209 END TEST hugepages 00:07:16.209 ************************************ 00:07:16.209 00:07:16.209 real 0m8.077s 00:07:16.209 user 0m2.743s 00:07:16.209 sys 0m5.417s 00:07:16.209 12:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.209 12:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:16.209 12:26:58 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:16.209 12:26:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.209 12:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.209 12:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:16.209 ************************************ 00:07:16.209 START TEST driver 00:07:16.209 ************************************ 00:07:16.209 12:26:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:16.466 * Looking for test storage... 
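The clear_hp records at the end of the hugepages suite reset the pools by writing 0 into each per-node sysfs counter. A minimal sketch of that cleanup under the same sysfs layout (the glob over node directories is a simplification; the hugepages.sh trace iterates the nodes_sys array instead):

    # Zero every hugepage pool on every NUMA node, as in hugepages.sh@37-@45.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes   # exported as in the trace; consumed elsewhere
    }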
00:07:16.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:16.466 12:26:58 -- setup/driver.sh@68 -- # setup reset 00:07:16.466 12:26:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:16.466 12:26:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:17.033 12:26:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:17.033 12:26:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.033 12:26:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.033 12:26:59 -- common/autotest_common.sh@10 -- # set +x 00:07:17.033 ************************************ 00:07:17.033 START TEST guess_driver 00:07:17.033 ************************************ 00:07:17.033 12:26:59 -- common/autotest_common.sh@1104 -- # guess_driver 00:07:17.033 12:26:59 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:17.033 12:26:59 -- setup/driver.sh@47 -- # local fail=0 00:07:17.033 12:26:59 -- setup/driver.sh@49 -- # pick_driver 00:07:17.033 12:26:59 -- setup/driver.sh@36 -- # vfio 00:07:17.033 12:26:59 -- setup/driver.sh@21 -- # local iommu_groups 00:07:17.033 12:26:59 -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:17.033 12:26:59 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:17.033 12:26:59 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:17.033 12:26:59 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:17.033 12:26:59 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:07:17.033 12:26:59 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:07:17.033 12:26:59 -- setup/driver.sh@32 -- # return 1 00:07:17.033 12:26:59 -- setup/driver.sh@38 -- # uio 00:07:17.033 12:26:59 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:07:17.033 12:26:59 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:07:17.033 12:26:59 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:07:17.033 12:26:59 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:07:17.033 12:26:59 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:07:17.033 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:07:17.033 12:26:59 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:07:17.033 12:26:59 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:07:17.033 12:26:59 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:17.033 12:26:59 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:07:17.033 Looking for driver=uio_pci_generic 00:07:17.033 12:26:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:17.033 12:26:59 -- setup/driver.sh@45 -- # setup output config 00:07:17.033 12:26:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:17.033 12:26:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:17.598 12:26:59 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:07:17.598 12:26:59 -- setup/driver.sh@58 -- # continue 00:07:17.598 12:26:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:17.598 12:27:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:17.598 12:27:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:17.598 12:27:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:18.589 12:27:01 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:18.589 12:27:01 -- setup/driver.sh@65 -- # setup reset
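The pick_driver records above encode the selection order: vfio is viable only when IOMMU groups exist or unsafe no-IOMMU mode is enabled; otherwise the test falls back to uio_pci_generic, accepted once modprobe --show-depends resolves it to actual .ko modules. A condensed sketch of that decision (the vfio-pci name on the success branch is an assumption; in the run above the vfio check fails with zero groups and unsafe_vfio=N, so uio_pci_generic wins):

    # Condensed driver pick, per setup/driver.sh@36-@39 in the trace.
    # Assumes shopt -s nullglob, which autotest_common.sh sets, so an empty
    # /sys/kernel/iommu_groups yields a zero-length array.
    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci            # assumed name; branch not reached here
        elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }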
00:07:18.589 12:27:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:18.589 12:27:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:19.524 00:07:19.524 real 0m2.259s 00:07:19.524 user 0m0.507s 00:07:19.524 sys 0m1.789s 00:07:19.524 12:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.524 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.524 ************************************ 00:07:19.525 END TEST guess_driver 00:07:19.525 ************************************ 00:07:19.525 00:07:19.525 real 0m3.068s 00:07:19.525 user 0m0.833s 00:07:19.525 sys 0m2.292s 00:07:19.525 12:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.525 ************************************ 00:07:19.525 END TEST driver 00:07:19.525 ************************************ 00:07:19.525 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.525 12:27:01 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:19.525 12:27:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.525 12:27:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.525 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.525 ************************************ 00:07:19.525 START TEST devices 00:07:19.525 ************************************ 00:07:19.525 12:27:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:19.525 * Looking for test storage... 00:07:19.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:19.525 12:27:01 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:19.525 12:27:01 -- setup/devices.sh@192 -- # setup reset 00:07:19.525 12:27:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:19.525 12:27:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:20.092 12:27:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:07:20.092 12:27:02 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:07:20.092 12:27:02 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:07:20.092 12:27:02 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:07:20.092 12:27:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:07:20.092 12:27:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:07:20.092 12:27:02 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:07:20.092 12:27:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:20.092 12:27:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:07:20.092 12:27:02 -- setup/devices.sh@196 -- # blocks=() 00:07:20.092 12:27:02 -- setup/devices.sh@196 -- # declare -a blocks 00:07:20.092 12:27:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:20.092 12:27:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:20.092 12:27:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:20.092 12:27:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:20.092 12:27:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:20.092 12:27:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:20.092 12:27:02 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:07:20.092 12:27:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:07:20.092 12:27:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:20.092 12:27:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:07:20.092 12:27:02 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:07:20.092 No valid GPT data, bailing 00:07:20.092 12:27:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:20.350 12:27:02 -- scripts/common.sh@393 -- # pt= 00:07:20.350 12:27:02 -- scripts/common.sh@394 -- # return 1 00:07:20.350 12:27:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:20.350 12:27:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:20.350 12:27:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:20.351 12:27:02 -- setup/common.sh@80 -- # echo 5368709120 00:07:20.351 12:27:02 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:07:20.351 12:27:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:20.351 12:27:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:07:20.351 12:27:02 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:20.351 12:27:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:20.351 12:27:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:20.351 12:27:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.351 12:27:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.351 12:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:20.351 ************************************ 00:07:20.351 START TEST nvme_mount 00:07:20.351 ************************************ 00:07:20.351 12:27:02 -- common/autotest_common.sh@1104 -- # nvme_mount 00:07:20.351 12:27:02 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:20.351 12:27:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:20.351 12:27:02 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:20.351 12:27:02 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:20.351 12:27:02 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:20.351 12:27:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:20.351 12:27:02 -- setup/common.sh@40 -- # local part_no=1 00:07:20.351 12:27:02 -- setup/common.sh@41 -- # local size=1073741824 00:07:20.351 12:27:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:20.351 12:27:02 -- setup/common.sh@44 -- # parts=() 00:07:20.351 12:27:02 -- setup/common.sh@44 -- # local parts 00:07:20.351 12:27:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:20.351 12:27:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:20.351 12:27:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:20.351 12:27:02 -- setup/common.sh@46 -- # (( part++ )) 00:07:20.351 12:27:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:20.351 12:27:02 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:20.351 12:27:02 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:20.351 12:27:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:21.287 Creating new GPT entries in memory. 00:07:21.287 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:21.287 other utilities. 00:07:21.287 12:27:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:21.287 12:27:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:21.287 12:27:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:07:21.287 12:27:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:21.287 12:27:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:22.223 Creating new GPT entries in memory. 00:07:22.223 The operation has completed successfully. 00:07:22.223 12:27:04 -- setup/common.sh@57 -- # (( part++ )) 00:07:22.223 12:27:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:22.223 12:27:04 -- setup/common.sh@62 -- # wait 96785 00:07:22.223 12:27:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:22.223 12:27:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:07:22.223 12:27:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:22.223 12:27:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:22.223 12:27:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:22.223 12:27:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:22.481 12:27:04 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:22.481 12:27:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:22.481 12:27:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:22.481 12:27:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:22.481 12:27:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:22.481 12:27:04 -- setup/devices.sh@53 -- # local found=0 00:07:22.481 12:27:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:22.481 12:27:04 -- setup/devices.sh@56 -- # : 00:07:22.481 12:27:04 -- setup/devices.sh@59 -- # local pci status 00:07:22.481 12:27:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.481 12:27:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:22.481 12:27:04 -- setup/devices.sh@47 -- # setup output config 00:07:22.481 12:27:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:22.481 12:27:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:22.739 12:27:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:22.739 12:27:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:22.739 12:27:05 -- setup/devices.sh@63 -- # found=1 00:07:22.739 12:27:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.739 12:27:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:22.739 12:27:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:22.739 12:27:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:22.739 12:27:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.674 12:27:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:23.674 12:27:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:23.674 12:27:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:23.674 12:27:06 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:23.674 12:27:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:23.674 12:27:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:07:23.674 12:27:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:23.674 12:27:06 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:23.674 12:27:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:23.674 12:27:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:23.933 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:23.933 12:27:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:23.933 12:27:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:23.933 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:23.933 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:23.933 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:23.933 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:23.933 12:27:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:07:23.933 12:27:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:07:23.933 12:27:06 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:23.933 12:27:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:23.933 12:27:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:23.933 12:27:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:23.933 12:27:06 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:23.933 12:27:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:23.933 12:27:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:23.933 12:27:06 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:23.933 12:27:06 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:23.933 12:27:06 -- setup/devices.sh@53 -- # local found=0 00:07:23.933 12:27:06 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:23.933 12:27:06 -- setup/devices.sh@56 -- # : 00:07:23.933 12:27:06 -- setup/devices.sh@59 -- # local pci status 00:07:23.933 12:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.933 12:27:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:23.933 12:27:06 -- setup/devices.sh@47 -- # setup output config 00:07:23.933 12:27:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:23.933 12:27:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:24.192 12:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:24.192 12:27:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:24.192 12:27:06 -- setup/devices.sh@63 -- # found=1 00:07:24.192 12:27:06 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:07:24.192 12:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:24.192 12:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:24.192 12:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:24.192 12:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.566 12:27:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:25.566 12:27:07 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:25.566 12:27:07 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:25.566 12:27:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:25.566 12:27:08 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:25.566 12:27:08 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:25.566 12:27:08 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:07:25.566 12:27:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:25.566 12:27:08 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:25.566 12:27:08 -- setup/devices.sh@50 -- # local mount_point= 00:07:25.566 12:27:08 -- setup/devices.sh@51 -- # local test_file= 00:07:25.566 12:27:08 -- setup/devices.sh@53 -- # local found=0 00:07:25.566 12:27:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:25.566 12:27:08 -- setup/devices.sh@59 -- # local pci status 00:07:25.566 12:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.566 12:27:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:25.566 12:27:08 -- setup/devices.sh@47 -- # setup output config 00:07:25.566 12:27:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:25.566 12:27:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:25.824 12:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:25.824 12:27:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:25.824 12:27:08 -- setup/devices.sh@63 -- # found=1 00:07:25.824 12:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.824 12:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:25.824 12:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:26.083 12:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:26.083 12:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.018 12:27:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:27.018 12:27:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:27.018 12:27:09 -- setup/devices.sh@68 -- # return 0 00:07:27.018 12:27:09 -- setup/devices.sh@128 -- # cleanup_nvme 00:07:27.018 12:27:09 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:27.018 12:27:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:27.018 12:27:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:27.018 12:27:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:27.018 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:27.018 00:07:27.018 real 0m6.776s 00:07:27.018 user 0m0.747s 00:07:27.018 sys 0m4.065s 00:07:27.018 12:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.018 
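Stripped of the verify plumbing, the nvme_mount body traced above reduces to: zap the disk, create one GPT partition, put ext4 on it, mount it under the test tree, drop a dummy file, and confirm that setup.sh refuses to unbind a device that is still mounted. A compressed sketch of that happy path, with the sgdisk and mkfs arguments taken from the trace (error handling and the PCI_ALLOWED verification pass are omitted):

    # Compressed happy path of TEST nvme_mount, per the records above.
    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191   # 262144-sector partition
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"        # the dummy file the verify step looks for
    # ...verify, then clean up:
    umount "$mnt"
    wipefs --all "${disk}p1"
    wipefs --all "$disk"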
************************************ 00:07:27.018 END TEST nvme_mount 00:07:27.018 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:27.018 ************************************ 00:07:27.018 12:27:09 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:27.018 12:27:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:27.018 12:27:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.018 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:27.018 ************************************ 00:07:27.018 START TEST dm_mount 00:07:27.018 ************************************ 00:07:27.018 12:27:09 -- common/autotest_common.sh@1104 -- # dm_mount 00:07:27.018 12:27:09 -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:27.018 12:27:09 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:27.018 12:27:09 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:27.018 12:27:09 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:27.018 12:27:09 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:27.018 12:27:09 -- setup/common.sh@40 -- # local part_no=2 00:07:27.018 12:27:09 -- setup/common.sh@41 -- # local size=1073741824 00:07:27.018 12:27:09 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:27.018 12:27:09 -- setup/common.sh@44 -- # parts=() 00:07:27.018 12:27:09 -- setup/common.sh@44 -- # local parts 00:07:27.018 12:27:09 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:27.018 12:27:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:27.018 12:27:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:27.018 12:27:09 -- setup/common.sh@46 -- # (( part++ )) 00:07:27.018 12:27:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:27.018 12:27:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:27.018 12:27:09 -- setup/common.sh@46 -- # (( part++ )) 00:07:27.018 12:27:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:27.018 12:27:09 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:27.018 12:27:09 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:27.018 12:27:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:28.391 Creating new GPT entries in memory. 00:07:28.391 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:28.391 other utilities. 00:07:28.391 12:27:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:28.391 12:27:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:28.391 12:27:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:28.391 12:27:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:28.391 12:27:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:29.342 Creating new GPT entries in memory. 00:07:29.342 The operation has completed successfully. 00:07:29.342 12:27:11 -- setup/common.sh@57 -- # (( part++ )) 00:07:29.342 12:27:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:29.342 12:27:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:29.342 12:27:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:29.342 12:27:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:30.271 The operation has completed successfully. 
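dm_mount reruns the same partition_drive helper with part_no=2, so the disk ends up with two 262144-sector slices that the following records turn into a single device-mapper target named nvme_dm_test (both partitions later appear as holders of dm-0). A sketch of that stage; the dmsetup table below is an illustrative linear concatenation, since the trace names the target but never shows its table:

    # Two partitions as logged, then an assumed linear dm table joining them.
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335
    printf '%s\n' \
        "0 262144 linear ${disk}p1 0" \
        "262144 262144 linear ${disk}p2 0" |
        dmsetup create nvme_dm_test    # dmsetup reads the table from stdin
    readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0 in this run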
00:07:30.271 12:27:12 -- setup/common.sh@57 -- # (( part++ )) 00:07:30.271 12:27:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:30.271 12:27:12 -- setup/common.sh@62 -- # wait 97277 00:07:30.271 12:27:12 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:30.271 12:27:12 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:30.271 12:27:12 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:30.271 12:27:12 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:30.271 12:27:12 -- setup/devices.sh@160 -- # for t in {1..5} 00:07:30.271 12:27:12 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:30.271 12:27:12 -- setup/devices.sh@161 -- # break 00:07:30.271 12:27:12 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:30.272 12:27:12 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:30.272 12:27:12 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:30.272 12:27:12 -- setup/devices.sh@166 -- # dm=dm-0 00:07:30.272 12:27:12 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:30.272 12:27:12 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:30.272 12:27:12 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:30.272 12:27:12 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:30.272 12:27:12 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:30.272 12:27:12 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:30.272 12:27:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:30.272 12:27:12 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:30.528 12:27:12 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:30.528 12:27:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:30.528 12:27:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:30.528 12:27:12 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:30.528 12:27:12 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:30.528 12:27:12 -- setup/devices.sh@53 -- # local found=0 00:07:30.528 12:27:12 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:30.528 12:27:12 -- setup/devices.sh@56 -- # : 00:07:30.528 12:27:12 -- setup/devices.sh@59 -- # local pci status 00:07:30.528 12:27:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:30.528 12:27:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:30.528 12:27:12 -- setup/devices.sh@47 -- # setup output config 00:07:30.528 12:27:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:30.528 12:27:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:30.786 12:27:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:30.786 12:27:13 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:30.786 12:27:13 -- setup/devices.sh@63 -- # found=1 00:07:30.786 12:27:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:30.786 12:27:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:30.786 12:27:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:30.786 12:27:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:30.786 12:27:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:32.687 12:27:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:32.687 12:27:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:32.687 12:27:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:32.687 12:27:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:32.687 12:27:14 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:32.687 12:27:14 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:32.687 12:27:14 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:32.687 12:27:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:07:32.687 12:27:15 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:32.687 12:27:15 -- setup/devices.sh@50 -- # local mount_point= 00:07:32.687 12:27:15 -- setup/devices.sh@51 -- # local test_file= 00:07:32.687 12:27:15 -- setup/devices.sh@53 -- # local found=0 00:07:32.687 12:27:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:32.687 12:27:15 -- setup/devices.sh@59 -- # local pci status 00:07:32.687 12:27:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:32.687 12:27:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:07:32.687 12:27:15 -- setup/devices.sh@47 -- # setup output config 00:07:32.687 12:27:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:32.687 12:27:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:32.945 12:27:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:32.945 12:27:15 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:32.945 12:27:15 -- setup/devices.sh@63 -- # found=1 00:07:32.945 12:27:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:32.945 12:27:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:32.945 12:27:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:32.945 12:27:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:07:32.945 12:27:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:34.846 12:27:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:34.846 12:27:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:34.846 12:27:17 -- setup/devices.sh@68 -- # return 0 00:07:34.846 12:27:17 -- setup/devices.sh@187 -- # cleanup_dm 00:07:34.846 12:27:17 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:34.846 12:27:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:34.846 12:27:17 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:34.846 12:27:17 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:34.846 12:27:17 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:34.846 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:34.846 12:27:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:34.846 12:27:17 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:34.846 00:07:34.846 real 0m7.759s 00:07:34.846 user 0m0.531s 00:07:34.846 sys 0m4.068s 00:07:34.846 12:27:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.846 12:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:34.846 ************************************ 00:07:34.846 END TEST dm_mount 00:07:34.846 ************************************ 00:07:34.846 12:27:17 -- setup/devices.sh@1 -- # cleanup 00:07:34.846 12:27:17 -- setup/devices.sh@11 -- # cleanup_nvme 00:07:34.846 12:27:17 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:34.846 12:27:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:34.846 12:27:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:34.846 12:27:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:34.846 12:27:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:35.104 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:35.104 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:35.104 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:35.104 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:35.104 12:27:17 -- setup/devices.sh@12 -- # cleanup_dm 00:07:35.104 12:27:17 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:35.104 12:27:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:35.104 12:27:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:35.104 12:27:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:35.104 12:27:17 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:35.104 12:27:17 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:35.104 00:07:35.104 real 0m15.592s 00:07:35.104 user 0m1.774s 00:07:35.104 sys 0m8.678s 00:07:35.104 12:27:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.104 ************************************ 00:07:35.104 END TEST devices 00:07:35.104 12:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.104 ************************************ 00:07:35.104 00:07:35.104 real 0m33.817s 00:07:35.104 user 0m7.282s 00:07:35.104 sys 0m21.684s 00:07:35.104 12:27:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.104 12:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.104 ************************************ 00:07:35.104 END TEST setup.sh 00:07:35.104 ************************************ 00:07:35.104 12:27:17 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:35.362 Hugepages 00:07:35.362 node hugesize free / total 00:07:35.362 node0 1048576kB 0 / 0 00:07:35.362 node0 2048kB 2048 / 2048 00:07:35.362 00:07:35.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:35.362 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:35.620 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:35.620 12:27:17 -- spdk/autotest.sh@141 -- # uname -s 00:07:35.620 12:27:17 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:07:35.620 12:27:17 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:07:35.620 12:27:17 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:07:36.185 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.123 12:27:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:38.056 12:27:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:38.056 12:27:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:38.056 12:27:20 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:07:38.056 12:27:20 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:07:38.056 12:27:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:38.056 12:27:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:38.056 12:27:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:38.056 12:27:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:38.056 12:27:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.056 12:27:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:38.056 12:27:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:07:38.056 12:27:20 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:07:38.573 Waiting for block devices as requested 00:07:38.573 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.573 12:27:21 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:07:38.573 12:27:21 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:07:38.573 12:27:21 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:07:38.573 12:27:21 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:07:38.573 12:27:21 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1530 -- # grep oacs 00:07:38.573 12:27:21 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:07:38.573 12:27:21 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:07:38.573 12:27:21 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:07:38.573 12:27:21 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:07:38.573 12:27:21 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:07:38.573 12:27:21 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:07:38.573 12:27:21 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:07:38.573 12:27:21 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:07:38.573 12:27:21 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:07:38.573 12:27:21 -- common/autotest_common.sh@1542 -- # continue 00:07:38.573 12:27:21 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:07:38.573 12:27:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:38.573 12:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 12:27:21 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:07:38.573 12:27:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:38.573 12:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 12:27:21 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:07:39.140 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.077 12:27:22 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:07:40.077 12:27:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:40.077 12:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:40.337 12:27:22 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:07:40.337 12:27:22 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:40.337 12:27:22 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:40.337 12:27:22 -- common/autotest_common.sh@1562 -- # bdfs=() 00:07:40.337 12:27:22 -- common/autotest_common.sh@1562 -- # local bdfs 00:07:40.337 12:27:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:40.337 12:27:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:40.337 12:27:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:40.337 12:27:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:40.337 12:27:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:40.337 12:27:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.337 12:27:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:40.337 12:27:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:07:40.337 12:27:22 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:07:40.338 12:27:22 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:07:40.338 12:27:22 -- common/autotest_common.sh@1565 -- # device=0x0010 00:07:40.338 12:27:22 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:40.338 12:27:22 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:07:40.338 12:27:22 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:07:40.338 12:27:22 -- common/autotest_common.sh@1578 -- # return 0 00:07:40.338 12:27:22 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:07:40.338 12:27:22 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:07:40.338 12:27:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.338 12:27:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.338 12:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:40.338 ************************************ 00:07:40.338 START TEST unittest 00:07:40.338 ************************************ 00:07:40.338 12:27:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:07:40.338 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:07:40.338 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:07:40.338 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:07:40.338 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:07:40.338 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:07:40.338 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:40.338 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:40.338 ++ rpc_py=rpc_cmd 00:07:40.338 ++ set -e 00:07:40.338 ++ shopt -s nullglob 00:07:40.338 ++ shopt -s extglob 00:07:40.338 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:40.338 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:40.338 +++ CONFIG_WPDK_DIR= 00:07:40.338 +++ CONFIG_ASAN=y 00:07:40.338 +++ CONFIG_VBDEV_COMPRESS=n 00:07:40.338 +++ CONFIG_HAVE_EXECINFO_H=y 00:07:40.338 +++ CONFIG_USDT=n 00:07:40.338 +++ CONFIG_CUSTOMOCF=n 00:07:40.338 +++ CONFIG_PREFIX=/usr/local 00:07:40.338 +++ CONFIG_RBD=n 00:07:40.338 +++ CONFIG_LIBDIR= 00:07:40.338 +++ CONFIG_IDXD=y 00:07:40.338 +++ CONFIG_NVME_CUSE=y 00:07:40.338 +++ CONFIG_SMA=n 00:07:40.338 +++ CONFIG_VTUNE=n 00:07:40.338 +++ CONFIG_TSAN=n 00:07:40.338 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:40.338 +++ CONFIG_VFIO_USER_DIR= 00:07:40.338 +++ CONFIG_PGO_CAPTURE=n 00:07:40.338 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:40.338 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:40.338 +++ CONFIG_LTO=n 00:07:40.338 +++ CONFIG_ISCSI_INITIATOR=y 00:07:40.338 +++ CONFIG_CET=n 00:07:40.338 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:40.338 +++ CONFIG_OCF_PATH= 00:07:40.338 +++ CONFIG_RDMA_SET_TOS=y 00:07:40.338 +++ CONFIG_HAVE_ARC4RANDOM=n 00:07:40.338 +++ CONFIG_HAVE_LIBARCHIVE=n 00:07:40.338 +++ CONFIG_UBLK=n 00:07:40.338 +++ CONFIG_ISAL_CRYPTO=y 00:07:40.338 +++ CONFIG_OPENSSL_PATH= 00:07:40.338 +++ CONFIG_OCF=n 00:07:40.338 +++ CONFIG_FUSE=n 00:07:40.338 +++ CONFIG_VTUNE_DIR= 00:07:40.338 +++ CONFIG_FUZZER_LIB= 00:07:40.338 +++ CONFIG_FUZZER=n 00:07:40.338 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:40.338 +++ CONFIG_CRYPTO=n 00:07:40.338 +++ CONFIG_PGO_USE=n 00:07:40.338 +++ CONFIG_VHOST=y 00:07:40.338 +++ CONFIG_DAOS=n 00:07:40.338 +++ CONFIG_DPDK_INC_DIR= 00:07:40.338 +++ CONFIG_DAOS_DIR= 00:07:40.338 +++ CONFIG_UNIT_TESTS=y 00:07:40.338 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:40.338 +++ CONFIG_VIRTIO=y 00:07:40.338 +++ CONFIG_COVERAGE=y 00:07:40.338 +++ CONFIG_RDMA=y 00:07:40.338 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:40.338 +++ CONFIG_URING_PATH= 00:07:40.338 +++ CONFIG_XNVME=n 00:07:40.338 +++ CONFIG_VFIO_USER=n 00:07:40.338 +++ CONFIG_ARCH=native 00:07:40.338 +++ CONFIG_URING_ZNS=n 00:07:40.338 +++ CONFIG_WERROR=y 00:07:40.338 +++ CONFIG_HAVE_LIBBSD=n 00:07:40.338 +++ CONFIG_UBSAN=y 00:07:40.338 +++ CONFIG_IPSEC_MB_DIR= 00:07:40.338 +++ CONFIG_GOLANG=n 00:07:40.338 +++ CONFIG_ISAL=y 00:07:40.338 +++ CONFIG_IDXD_KERNEL=n 00:07:40.338 +++ CONFIG_DPDK_LIB_DIR= 00:07:40.338 +++ CONFIG_RDMA_PROV=verbs 00:07:40.338 +++ CONFIG_APPS=y 00:07:40.338 +++ CONFIG_SHARED=n 00:07:40.338 +++ CONFIG_FC_PATH= 00:07:40.338 +++ CONFIG_DPDK_PKG_CONFIG=n 00:07:40.338 +++ CONFIG_FC=n 00:07:40.338 +++ CONFIG_AVAHI=n 00:07:40.338 +++ CONFIG_FIO_PLUGIN=y 00:07:40.338 +++ CONFIG_RAID5F=y 00:07:40.338 +++ CONFIG_EXAMPLES=y 00:07:40.338 +++ CONFIG_TESTS=y 00:07:40.338 +++ CONFIG_CRYPTO_MLX5=n 00:07:40.338 +++ CONFIG_MAX_LCORES= 00:07:40.338 +++ CONFIG_IPSEC_MB=n 00:07:40.338 +++ CONFIG_DEBUG=y 00:07:40.338 +++ CONFIG_DPDK_COMPRESSDEV=n 00:07:40.338 +++ CONFIG_CROSS_PREFIX= 00:07:40.338 +++ CONFIG_URING=n 00:07:40.338 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:40.338 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:40.338 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:07:40.338 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:40.338 +++ _root=/home/vagrant/spdk_repo/spdk 00:07:40.338 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.338 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:40.338 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.338 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:40.338 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:40.338 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:40.338 +++ VHOST_APP=("$_app_dir/vhost") 00:07:40.338 +++ DD_APP=("$_app_dir/spdk_dd") 00:07:40.338 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:07:40.338 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:40.338 +++ [[ #ifndef SPDK_CONFIG_H 00:07:40.338 #define SPDK_CONFIG_H 00:07:40.338 #define SPDK_CONFIG_APPS 1 00:07:40.338 #define SPDK_CONFIG_ARCH native 00:07:40.338 #define SPDK_CONFIG_ASAN 1 00:07:40.338 #undef SPDK_CONFIG_AVAHI 00:07:40.338 #undef SPDK_CONFIG_CET 00:07:40.338 #define SPDK_CONFIG_COVERAGE 1 00:07:40.338 #define SPDK_CONFIG_CROSS_PREFIX 00:07:40.338 #undef SPDK_CONFIG_CRYPTO 00:07:40.338 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:40.338 #undef SPDK_CONFIG_CUSTOMOCF 00:07:40.338 #undef SPDK_CONFIG_DAOS 00:07:40.338 #define SPDK_CONFIG_DAOS_DIR 00:07:40.338 #define SPDK_CONFIG_DEBUG 1 00:07:40.338 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:40.338 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:40.338 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:40.338 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:40.338 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:40.338 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:40.338 #define SPDK_CONFIG_EXAMPLES 1 00:07:40.338 #undef SPDK_CONFIG_FC 00:07:40.338 #define SPDK_CONFIG_FC_PATH 00:07:40.338 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:40.338 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:40.338 #undef SPDK_CONFIG_FUSE 00:07:40.338 #undef SPDK_CONFIG_FUZZER 00:07:40.338 #define SPDK_CONFIG_FUZZER_LIB 00:07:40.338 #undef SPDK_CONFIG_GOLANG 00:07:40.338 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:07:40.338 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:40.338 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:40.338 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:40.338 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:40.338 #define SPDK_CONFIG_IDXD 1 00:07:40.338 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:40.338 #undef SPDK_CONFIG_IPSEC_MB 00:07:40.338 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:40.338 #define SPDK_CONFIG_ISAL 1 00:07:40.338 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:40.338 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:40.338 #define SPDK_CONFIG_LIBDIR 00:07:40.338 #undef SPDK_CONFIG_LTO 00:07:40.338 #define SPDK_CONFIG_MAX_LCORES 00:07:40.338 #define SPDK_CONFIG_NVME_CUSE 1 00:07:40.338 #undef SPDK_CONFIG_OCF 00:07:40.338 #define SPDK_CONFIG_OCF_PATH 00:07:40.338 #define SPDK_CONFIG_OPENSSL_PATH 00:07:40.338 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:40.338 #undef SPDK_CONFIG_PGO_USE 00:07:40.338 #define SPDK_CONFIG_PREFIX /usr/local 00:07:40.338 #define SPDK_CONFIG_RAID5F 1 00:07:40.338 #undef SPDK_CONFIG_RBD 00:07:40.338 #define SPDK_CONFIG_RDMA 1 00:07:40.338 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:40.338 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:40.338 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:40.338 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:40.338 #undef SPDK_CONFIG_SHARED 00:07:40.338 #undef SPDK_CONFIG_SMA 00:07:40.338 #define SPDK_CONFIG_TESTS 1 00:07:40.338 
#undef SPDK_CONFIG_TSAN 00:07:40.338 #undef SPDK_CONFIG_UBLK 00:07:40.338 #define SPDK_CONFIG_UBSAN 1 00:07:40.338 #define SPDK_CONFIG_UNIT_TESTS 1 00:07:40.338 #undef SPDK_CONFIG_URING 00:07:40.338 #define SPDK_CONFIG_URING_PATH 00:07:40.338 #undef SPDK_CONFIG_URING_ZNS 00:07:40.338 #undef SPDK_CONFIG_USDT 00:07:40.338 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:40.338 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:40.338 #undef SPDK_CONFIG_VFIO_USER 00:07:40.338 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:40.338 #define SPDK_CONFIG_VHOST 1 00:07:40.338 #define SPDK_CONFIG_VIRTIO 1 00:07:40.338 #undef SPDK_CONFIG_VTUNE 00:07:40.338 #define SPDK_CONFIG_VTUNE_DIR 00:07:40.338 #define SPDK_CONFIG_WERROR 1 00:07:40.338 #define SPDK_CONFIG_WPDK_DIR 00:07:40.338 #undef SPDK_CONFIG_XNVME 00:07:40.338 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:40.338 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:40.338 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.338 +++ [[ -e /bin/wpdk_common.sh ]] 00:07:40.338 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.338 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.338 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.339 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.339 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.339 ++++ export PATH 00:07:40.339 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:40.339 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:40.339 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:40.339 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:40.339 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:40.339 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:40.339 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:40.339 +++ TEST_TAG=N/A 00:07:40.339 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:40.339 ++ : 1 00:07:40.339 ++ export RUN_NIGHTLY 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_RUN_VALGRIND 00:07:40.339 ++ : 1 00:07:40.339 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:07:40.339 ++ : 1 00:07:40.339 ++ export SPDK_TEST_UNITTEST 00:07:40.339 ++ : 00:07:40.339 ++ export SPDK_TEST_AUTOBUILD 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_RELEASE_BUILD 00:07:40.339 ++ : 0 
00:07:40.339 ++ export SPDK_TEST_ISAL 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_ISCSI 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_ISCSI_INITIATOR 00:07:40.339 ++ : 1 00:07:40.339 ++ export SPDK_TEST_NVME 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_NVME_PMR 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_NVME_BP 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_NVME_CLI 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_NVME_CUSE 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_NVME_FDP 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_NVMF 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_VFIOUSER 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_VFIOUSER_QEMU 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_FUZZER 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_FUZZER_SHORT 00:07:40.339 ++ : rdma 00:07:40.339 ++ export SPDK_TEST_NVMF_TRANSPORT 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_RBD 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_VHOST 00:07:40.339 ++ : 1 00:07:40.339 ++ export SPDK_TEST_BLOCKDEV 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_IOAT 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_BLOBFS 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_VHOST_INIT 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_LVOL 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_VBDEV_COMPRESS 00:07:40.339 ++ : 1 00:07:40.339 ++ export SPDK_RUN_ASAN 00:07:40.339 ++ : 1 00:07:40.339 ++ export SPDK_RUN_UBSAN 00:07:40.339 ++ : 00:07:40.339 ++ export SPDK_RUN_EXTERNAL_DPDK 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_RUN_NON_ROOT 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_CRYPTO 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_FTL 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_OCF 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_VMD 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_OPAL 00:07:40.339 ++ : 00:07:40.339 ++ export SPDK_TEST_NATIVE_DPDK 00:07:40.339 ++ : true 00:07:40.339 ++ export SPDK_AUTOTEST_X 00:07:40.339 ++ : 1 00:07:40.339 ++ export SPDK_TEST_RAID5 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_URING 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_USDT 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_USE_IGB_UIO 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_SCHEDULER 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_SCANBUILD 00:07:40.339 ++ : 00:07:40.339 ++ export SPDK_TEST_NVMF_NICS 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_SMA 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_DAOS 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_XNVME 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_ACCEL_DSA 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_ACCEL_IAA 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_ACCEL_IOAT 00:07:40.339 ++ : 00:07:40.339 ++ export SPDK_TEST_FUZZER_TARGET 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_TEST_NVMF_MDNS 00:07:40.339 ++ : 0 00:07:40.339 ++ export SPDK_JSONRPC_GO_CLIENT 00:07:40.339 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:40.339 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:40.339 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:40.339 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:40.339 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.339 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.339 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.339 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:40.339 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.339 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.339 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:40.339 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:40.339 ++ export PYTHONDONTWRITEBYTECODE=1 00:07:40.339 ++ PYTHONDONTWRITEBYTECODE=1 00:07:40.339 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.339 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.339 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.339 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.339 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:07:40.339 ++ rm -rf /var/tmp/asan_suppression_file 00:07:40.339 ++ cat 00:07:40.339 ++ echo leak:libfuse3.so 00:07:40.339 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.339 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.339 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.339 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.339 ++ '[' -z /var/spdk/dependencies ']' 00:07:40.339 ++ export DEPENDENCY_DIR 00:07:40.339 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.339 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:40.339 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.339 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:40.339 ++ export QEMU_BIN= 00:07:40.339 ++ QEMU_BIN= 00:07:40.339 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:07:40.339 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:07:40.339 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:40.339 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:40.339 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.339 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.339 ++ '[' 0 -eq 0 ']' 00:07:40.339 ++ export valgrind= 00:07:40.339 ++ valgrind= 00:07:40.339 +++ uname -s 00:07:40.339 ++ '[' Linux = Linux ']' 00:07:40.339 ++ HUGEMEM=4096 00:07:40.339 ++ export CLEAR_HUGE=yes 00:07:40.339 ++ CLEAR_HUGE=yes 00:07:40.339 ++ [[ 0 -eq 1 ]] 00:07:40.339 ++ [[ 0 -eq 1 ]] 00:07:40.339 ++ MAKE=make 00:07:40.339 +++ nproc 00:07:40.339 ++ MAKEFLAGS=-j10 00:07:40.339 ++ export HUGEMEM=4096 00:07:40.339 ++ HUGEMEM=4096 00:07:40.339 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:40.339 ++ NO_HUGE=() 00:07:40.339 ++ TEST_MODE= 00:07:40.339 ++ [[ -z '' ]] 00:07:40.339 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:07:40.339 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:07:40.339 ++ exec 00:07:40.339 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:07:40.339 ++ set_test_storage 2147483648 00:07:40.339 ++ [[ -v testdir ]] 00:07:40.339 ++ local requested_size=2147483648 00:07:40.339 ++ local mount target_dir 00:07:40.339 ++ local -A mounts fss sizes avails uses 00:07:40.339 ++ local source fs size avail mount use 00:07:40.339 ++ local storage_fallback storage_candidates 00:07:40.339 +++ mktemp -udt spdk.XXXXXX 00:07:40.339 ++ storage_fallback=/tmp/spdk.OKO1iZ 00:07:40.339 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:40.339 ++ [[ -n '' ]] 00:07:40.339 ++ [[ -n '' ]] 00:07:40.339 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.OKO1iZ/tests/unit /tmp/spdk.OKO1iZ 00:07:40.339 ++ requested_size=2214592512 00:07:40.339 ++ read -r source fs size use avail _ mount 00:07:40.339 +++ df -T 00:07:40.339 +++ grep -v Filesystem 00:07:40.339 ++ mounts["$mount"]=tmpfs 00:07:40.339 ++ fss["$mount"]=tmpfs 00:07:40.339 ++ avails["$mount"]=1252601856 00:07:40.339 ++ sizes["$mount"]=1253683200 00:07:40.339 ++ uses["$mount"]=1081344 00:07:40.339 ++ read -r source fs size use avail _ mount 00:07:40.339 ++ mounts["$mount"]=/dev/vda1 00:07:40.339 ++ fss["$mount"]=ext4 00:07:40.339 ++ avails["$mount"]=10466746368 00:07:40.339 ++ sizes["$mount"]=20616794112 00:07:40.339 ++ uses["$mount"]=10133270528 00:07:40.340 ++ read -r source fs size use avail _ mount 00:07:40.340 ++ mounts["$mount"]=tmpfs 00:07:40.340 ++ fss["$mount"]=tmpfs 00:07:40.340 ++ avails["$mount"]=6268403712 00:07:40.340 ++ sizes["$mount"]=6268403712 00:07:40.340 ++ uses["$mount"]=0 00:07:40.340 ++ read -r source fs size use avail _ mount 00:07:40.340 ++ mounts["$mount"]=tmpfs 00:07:40.340 ++ fss["$mount"]=tmpfs 00:07:40.340 ++ avails["$mount"]=5242880 00:07:40.340 ++ sizes["$mount"]=5242880 00:07:40.340 ++ uses["$mount"]=0 00:07:40.340 ++ read -r source fs size use avail _ mount 00:07:40.340 ++ mounts["$mount"]=/dev/vda15 00:07:40.340 ++ fss["$mount"]=vfat 00:07:40.340 ++ avails["$mount"]=103061504 00:07:40.340 ++ sizes["$mount"]=109395968 00:07:40.340 ++ uses["$mount"]=6334464 00:07:40.340 ++ read -r source fs size use avail _ mount 00:07:40.340 ++ mounts["$mount"]=tmpfs 00:07:40.340 ++ fss["$mount"]=tmpfs 00:07:40.340 ++ avails["$mount"]=1253675008 00:07:40.340 ++ sizes["$mount"]=1253679104 00:07:40.340 ++ uses["$mount"]=4096 00:07:40.340 ++ read -r source fs size use avail _ mount 00:07:40.340 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:07:40.340 ++ fss["$mount"]=fuse.sshfs 00:07:40.340 ++ avails["$mount"]=92651098112 00:07:40.340 ++ sizes["$mount"]=105088212992 00:07:40.340 ++ uses["$mount"]=7051681792 00:07:40.340 ++ read -r source fs size use avail _ mount 00:07:40.340 ++ printf '* Looking for test storage...\n' 00:07:40.340 * Looking for test storage... 
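At this point set_test_storage has tabulated every mount from 'df -T' into associative arrays; the candidate scan that follows boils down to roughly this sketch (array bookkeeping and loop names mirror the trace above but are illustrative):

requested_size=$(( 2147483648 + 64 * 1024 * 1024 ))     # 2214592512, matching the log
for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    target_space=${avails[$mount]}                       # filled from 'df -T' above
    (( target_space >= requested_size )) && break        # '/' offers ~10.4 GB free here, so it wins
done
new_size=$(( requested_size + ${uses[$mount]} ))         # projected usage once tests land
(( new_size * 100 / ${sizes[$mount]} > 95 )) && printf 'warning: %s nearly full\n' "$mount"

In this run new_size comes to 12347863040 against a 20616794112-byte filesystem, about 60%, so the 95% warning does not fire and /home/vagrant/spdk_repo/spdk/test/unit is accepted as test storage.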
00:07:40.340 ++ local target_space new_size 00:07:40.340 ++ for target_dir in "${storage_candidates[@]}" 00:07:40.340 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:07:40.340 +++ awk '$1 !~ /Filesystem/{print $6}' 00:07:40.340 ++ mount=/ 00:07:40.340 ++ target_space=10466746368 00:07:40.340 ++ (( target_space == 0 || target_space < requested_size )) 00:07:40.340 ++ (( target_space >= requested_size )) 00:07:40.340 ++ [[ ext4 == tmpfs ]] 00:07:40.340 ++ [[ ext4 == ramfs ]] 00:07:40.340 ++ [[ / == / ]] 00:07:40.340 ++ new_size=12347863040 00:07:40.340 ++ (( new_size * 100 / sizes[/] > 95 )) 00:07:40.340 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:07:40.340 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:07:40.340 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:07:40.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:07:40.340 ++ return 0 00:07:40.340 ++ set -o errtrace 00:07:40.340 ++ shopt -s extdebug 00:07:40.340 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:07:40.340 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:40.340 12:27:22 -- common/autotest_common.sh@1672 -- # true 00:07:40.340 12:27:22 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:40.340 12:27:22 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:07:40.340 12:27:22 -- common/autotest_common.sh@29 -- # exec 00:07:40.340 12:27:22 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:40.340 12:27:22 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:40.340 12:27:22 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:40.340 12:27:22 -- common/autotest_common.sh@18 -- # set -x 00:07:40.340 12:27:22 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:07:40.340 12:27:22 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:07:40.340 12:27:22 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:07:40.340 12:27:22 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:07:40.598 12:27:22 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:07:40.598 12:27:22 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:07:40.598 12:27:22 -- unit/unittest.sh@179 -- # hash lcov 00:07:40.598 12:27:22 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:40.598 12:27:22 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:40.598 12:27:22 -- unit/unittest.sh@180 -- # cov_avail=yes 00:07:40.598 12:27:22 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:07:40.598 12:27:22 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:07:40.598 12:27:22 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:40.598 12:27:22 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:40.598 12:27:22 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:07:40.598 --rc lcov_branch_coverage=1 00:07:40.598 --rc lcov_function_coverage=1 00:07:40.598 --rc genhtml_branch_coverage=1 00:07:40.598 --rc genhtml_function_coverage=1 00:07:40.598 --rc genhtml_legend=1 00:07:40.598 --rc geninfo_all_blocks=1 00:07:40.598 ' 00:07:40.598 12:27:22 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:07:40.598 --rc lcov_branch_coverage=1 00:07:40.598 --rc lcov_function_coverage=1 00:07:40.598 --rc genhtml_branch_coverage=1 00:07:40.598 --rc genhtml_function_coverage=1 00:07:40.598 --rc genhtml_legend=1 00:07:40.598 
--rc geninfo_all_blocks=1 00:07:40.598 ' 00:07:40.598 12:27:22 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:07:40.598 --rc lcov_branch_coverage=1 00:07:40.598 --rc lcov_function_coverage=1 00:07:40.598 --rc genhtml_branch_coverage=1 00:07:40.598 --rc genhtml_function_coverage=1 00:07:40.598 --rc genhtml_legend=1 00:07:40.598 --rc geninfo_all_blocks=1 00:07:40.598 --no-external' 00:07:40.598 12:27:22 -- unit/unittest.sh@200 -- # LCOV='lcov 00:07:40.598 --rc lcov_branch_coverage=1 00:07:40.598 --rc lcov_function_coverage=1 00:07:40.598 --rc genhtml_branch_coverage=1 00:07:40.598 --rc genhtml_function_coverage=1 00:07:40.598 --rc genhtml_legend=1 00:07:40.598 --rc geninfo_all_blocks=1 00:07:40.598 --no-external' 00:07:40.598 12:27:22 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:07:58.683 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:58.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:58.683 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:58.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:58.683 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:58.683 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:08:25.295 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:08:25.295 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:08:25.295 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:08:25.295 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:08:25.295 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:08:25.296 
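The geninfo 'no functions found' warnings around this point are benign: the baseline capture (lcov -c -i) scans every .gcno in the tree, and objects that only compile headers, such as the cpp_headers tests, contain no instrumented functions. The usual three-step coverage flow the harness is performing looks like this sketch; only ut_cov_base.info and $UT_COVERAGE appear in the log, the test-capture and merge file names are hypothetical:

# 1. zeroed baseline before any test runs (the command traced above)
lcov $LCOV_OPTS -q -c -i -d . -t Baseline -o $UT_COVERAGE/ut_cov_base.info
# 2. real counters after the unit tests finish
lcov $LCOV_OPTS -q -c -d . -t Test -o $UT_COVERAGE/ut_cov_test.info
# 3. merge, so files never touched by tests still report 0% instead of vanishing
lcov $LCOV_OPTS -a $UT_COVERAGE/ut_cov_base.info \
     -a $UT_COVERAGE/ut_cov_test.info -o $UT_COVERAGE/ut_cov_total.info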
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:08:25.296 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:08:25.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:08:25.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:08:25.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:08:25.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:08:25.816 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:08:25.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:08:25.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:08:25.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:08:25.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:08:25.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:08:25.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:08:25.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:08:25.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:08:26.076 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:08:26.076 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:08:26.076 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:08:26.076 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:08:28.628 12:28:10 -- unit/unittest.sh@206 -- # uname -m 00:08:28.628 12:28:10 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:08:28.628 12:28:10 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:08:28.628 12:28:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.628 12:28:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.628 12:28:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.628 ************************************ 00:08:28.628 START TEST unittest_pci_event 00:08:28.628 ************************************ 00:08:28.628 12:28:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:08:28.628 00:08:28.628 00:08:28.628 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.628 http://cunit.sourceforge.net/ 00:08:28.628 00:08:28.628 00:08:28.628 Suite: pci_event 00:08:28.628 Test: test_pci_parse_event 
...[2024-10-01 12:28:10.598674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:08:28.628 [2024-10-01 12:28:10.599541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:08:28.628 passed 00:08:28.628 00:08:28.628 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.628 suites 1 1 n/a 0 0 00:08:28.628 tests 1 1 1 0 0 00:08:28.628 asserts 15 15 15 0 n/a 00:08:28.628 00:08:28.628 Elapsed time = 0.001 seconds 00:08:28.628 00:08:28.628 real 0m0.055s 00:08:28.628 user 0m0.020s 00:08:28.628 sys 0m0.031s 00:08:28.628 12:28:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.628 12:28:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.628 ************************************ 00:08:28.628 END TEST unittest_pci_event 00:08:28.628 ************************************ 00:08:28.628 12:28:10 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:08:28.628 12:28:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.628 12:28:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.628 12:28:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.628 ************************************ 00:08:28.628 START TEST unittest_include 00:08:28.628 ************************************ 00:08:28.628 12:28:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:08:28.628 00:08:28.628 00:08:28.628 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.628 http://cunit.sourceforge.net/ 00:08:28.628 00:08:28.628 00:08:28.628 Suite: histogram 00:08:28.628 Test: histogram_test ...passed 00:08:28.628 Test: histogram_merge ...passed 00:08:28.628 00:08:28.628 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.628 suites 1 1 n/a 0 0 00:08:28.628 tests 2 2 2 0 0 00:08:28.628 asserts 50 50 50 0 n/a 00:08:28.628 00:08:28.628 Elapsed time = 0.007 seconds 00:08:28.628 00:08:28.628 real 0m0.054s 00:08:28.628 user 0m0.022s 00:08:28.628 sys 0m0.033s 00:08:28.628 12:28:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.628 12:28:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.628 ************************************ 00:08:28.628 END TEST unittest_include 00:08:28.628 ************************************ 00:08:28.629 12:28:10 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:08:28.629 12:28:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.629 12:28:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.629 12:28:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.629 ************************************ 00:08:28.629 START TEST unittest_bdev 00:08:28.629 ************************************ 00:08:28.629 12:28:10 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:08:28.629 12:28:10 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:08:28.629 00:08:28.629 00:08:28.629 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.629 http://cunit.sourceforge.net/ 00:08:28.629 00:08:28.629 00:08:28.629 Suite: bdev 00:08:28.629 Test: bytes_to_blocks_test ...passed 00:08:28.629 Test: num_blocks_test ...passed 00:08:28.629 Test: io_valid_test ...passed 00:08:28.629 Test: open_write_test ...[2024-10-01 12:28:10.937301] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:08:28.629 [2024-10-01 12:28:10.937569] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:08:28.629 [2024-10-01 12:28:10.937658] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:08:28.629 passed 00:08:28.629 Test: claim_test ...passed 00:08:28.629 Test: alias_add_del_test ...[2024-10-01 12:28:11.030704] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:08:28.629 [2024-10-01 12:28:11.030829] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:08:28.629 [2024-10-01 12:28:11.030859] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:08:28.629 passed 00:08:28.629 Test: get_device_stat_test ...passed 00:08:28.629 Test: bdev_io_types_test ...passed 00:08:28.629 Test: bdev_io_wait_test ...passed 00:08:28.629 Test: bdev_io_spans_split_test ...passed 00:08:28.887 Test: bdev_io_boundary_split_test ...passed 00:08:28.887 Test: bdev_io_max_size_and_segment_split_test ...[2024-10-01 12:28:11.217841] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:08:28.887 passed 00:08:28.887 Test: bdev_io_mix_split_test ...passed 00:08:28.887 Test: bdev_io_split_with_io_wait ...passed 00:08:28.887 Test: bdev_io_write_unit_split_test ...[2024-10-01 12:28:11.354876] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:08:28.887 [2024-10-01 12:28:11.354976] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:08:28.887 [2024-10-01 12:28:11.355010] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:08:28.887 [2024-10-01 12:28:11.355043] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:08:28.887 passed 00:08:29.145 Test: bdev_io_alignment_with_boundary ...passed 00:08:29.145 Test: bdev_io_alignment ...passed 00:08:29.145 Test: bdev_histograms ...passed 00:08:29.145 Test: bdev_write_zeroes ...passed 00:08:29.145 Test: bdev_compare_and_write ...passed 00:08:29.404 Test: bdev_compare ...passed 00:08:29.404 Test: bdev_compare_emulated ...passed 00:08:29.404 Test: bdev_zcopy_write ...passed 00:08:29.404 Test: bdev_zcopy_read ...passed 00:08:29.404 Test: bdev_open_while_hotremove ...passed 00:08:29.404 Test: bdev_close_while_hotremove ...passed 00:08:29.404 Test: bdev_open_ext_test ...passed 00:08:29.404 Test: bdev_open_ext_unregister ...[2024-10-01 12:28:11.895641] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:08:29.404 [2024-10-01 12:28:11.895821] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:08:29.404 passed 00:08:29.662 Test: bdev_set_io_timeout ...passed 00:08:29.662 Test: bdev_set_qd_sampling ...passed 00:08:29.662 Test: lba_range_overlap ...passed 00:08:29.662 Test: lock_lba_range_check_ranges 
...passed 00:08:29.662 Test: lock_lba_range_with_io_outstanding ...passed 00:08:29.662 Test: lock_lba_range_overlapped ...passed 00:08:29.662 Test: bdev_quiesce ...[2024-10-01 12:28:12.141604] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:08:29.662 passed 00:08:29.920 Test: bdev_io_abort ...passed 00:08:29.920 Test: bdev_unmap ...passed 00:08:29.920 Test: bdev_write_zeroes_split_test ...passed 00:08:29.920 Test: bdev_set_options_test ...passed 00:08:29.920 Test: bdev_get_memory_domains ...passed 00:08:29.920 Test: bdev_io_ext ...[2024-10-01 12:28:12.302662] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:08:29.920 passed 00:08:29.920 Test: bdev_io_ext_no_opts ...passed 00:08:29.920 Test: bdev_io_ext_invalid_opts ...passed 00:08:30.179 Test: bdev_io_ext_split ...passed 00:08:30.179 Test: bdev_io_ext_bounce_buffer ...passed 00:08:30.179 Test: bdev_register_uuid_alias ...[2024-10-01 12:28:12.544990] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 47a5d3cf-bc27-4fe5-b7c8-4cf604809de3 already exists 00:08:30.179 [2024-10-01 12:28:12.545068] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:47a5d3cf-bc27-4fe5-b7c8-4cf604809de3 alias for bdev bdev0 00:08:30.179 passed 00:08:30.179 Test: bdev_unregister_by_name ...[2024-10-01 12:28:12.568456] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:08:30.179 passed 00:08:30.179 Test: for_each_bdev_test ...[2024-10-01 12:28:12.568516] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:08:30.179 passed 00:08:30.179 Test: bdev_seek_test ...passed 00:08:30.179 Test: bdev_copy ...passed 00:08:30.179 Test: bdev_copy_split_test ...passed 00:08:30.179 Test: examine_locks ...passed 00:08:30.179 Test: claim_v2_rwo ...[2024-10-01 12:28:12.704592] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:08:30.179 passed 00:08:30.179 Test: claim_v2_rom ...[2024-10-01 12:28:12.704673] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.704695] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.704750] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.704764] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.704809] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:08:30.179 [2024-10-01 12:28:12.704922] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:08:30.179 passed 00:08:30.179 Test: claim_v2_rwm ...[2024-10-01 12:28:12.704962] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.704985] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.705023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.705063] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:08:30.179 [2024-10-01 12:28:12.705097] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:08:30.179 [2024-10-01 12:28:12.705187] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:08:30.179 [2024-10-01 12:28:12.705243] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.705273] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.705297] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.705314] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.705342] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:08:30.179 [2024-10-01 12:28:12.705372] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:08:30.179 passed 00:08:30.179 Test: claim_v2_existing_writer ...passed 00:08:30.179 Test: claim_v2_existing_v1 ...passed 00:08:30.179 Test: claim_v1_existing_v2 ...[2024-10-01 12:28:12.705483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:08:30.179 [2024-10-01 12:28:12.705509] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:08:30.179 [2024-10-01 12:28:12.705601] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:08:30.180 [2024-10-01 12:28:12.705630] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:08:30.180 [2024-10-01 12:28:12.705647] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:08:30.180 [2024-10-01 12:28:12.705761] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:08:30.180 [2024-10-01 12:28:12.705803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:08:30.180 [2024-10-01 12:28:12.705831] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:08:30.180 passed 00:08:30.180 Test: examine_claimed ...passed 00:08:30.180 00:08:30.180 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.180 suites 1 1 n/a 0 0 00:08:30.180 tests 59 59 59 0 0 00:08:30.180 asserts 4599 4599 4599 0 n/a 00:08:30.180 00:08:30.180 Elapsed time = 1.838 seconds 00:08:30.180 [2024-10-01 12:28:12.706043] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:08:30.438 12:28:12 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:08:30.438 00:08:30.438 00:08:30.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.438 http://cunit.sourceforge.net/ 00:08:30.438 00:08:30.438 00:08:30.438 Suite: nvme 00:08:30.438 Test: test_create_ctrlr ...passed 00:08:30.438 Test: test_reset_ctrlr ...[2024-10-01 12:28:12.773015] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:08:30.438 passed 00:08:30.438 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:08:30.438 Test: test_failover_ctrlr ...passed 00:08:30.438 Test: test_race_between_failover_and_add_secondary_trid ...[2024-10-01 12:28:12.775543] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.438 [2024-10-01 12:28:12.775785] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.438 passed 00:08:30.438 Test: test_pending_reset ...[2024-10-01 12:28:12.775961] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.438 [2024-10-01 12:28:12.777308] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.438 [2024-10-01 12:28:12.777536] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.438 passed 00:08:30.438 Test: test_attach_ctrlr ...[2024-10-01 12:28:12.778576] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:08:30.438 passed 00:08:30.438 Test: test_aer_cb ...passed 00:08:30.438 Test: test_submit_nvme_cmd ...passed 00:08:30.438 Test: test_add_remove_trid ...passed 00:08:30.438 Test: test_abort ...[2024-10-01 12:28:12.781270] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:08:30.438 passed 00:08:30.438 Test: test_get_io_qpair ...passed 00:08:30.438 Test: test_bdev_unregister ...passed 00:08:30.438 Test: test_compare_ns ...passed 00:08:30.438 Test: test_init_ana_log_page ...passed 00:08:30.438 Test: test_get_memory_domains ...passed 00:08:30.438 Test: test_reconnect_qpair ...[2024-10-01 12:28:12.783186] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.438 passed 00:08:30.438 Test: test_create_bdev_ctrlr ...[2024-10-01 12:28:12.783579] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:08:30.438 passed 00:08:30.438 Test: test_add_multi_ns_to_bdev ...[2024-10-01 12:28:12.784538] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:08:30.438 passed 00:08:30.438 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:08:30.438 Test: test_admin_path ...passed 00:08:30.438 Test: test_reset_bdev_ctrlr ...passed 00:08:30.438 Test: test_find_io_path ...passed 00:08:30.438 Test: test_retry_io_if_ana_state_is_updating ...passed 00:08:30.438 Test: test_retry_io_for_io_path_error ...passed 00:08:30.438 Test: test_retry_io_count ...passed 00:08:30.438 Test: test_concurrent_read_ana_log_page ...passed 00:08:30.438 Test: test_retry_io_for_ana_error ...passed 00:08:30.438 Test: test_check_io_error_resiliency_params ...[2024-10-01 12:28:12.790124] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:08:30.438 [2024-10-01 12:28:12.790176] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:08:30.438 [2024-10-01 12:28:12.790200] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:08:30.438 [2024-10-01 12:28:12.790222] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:08:30.438 [2024-10-01 12:28:12.790244] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:08:30.438 [2024-10-01 12:28:12.790267] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:08:30.438 [2024-10-01 12:28:12.790288] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:08:30.439 [2024-10-01 12:28:12.790326] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:08:30.439 passed 00:08:30.439 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-10-01 12:28:12.790347] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:08:30.439 passed 00:08:30.439 Test: test_reconnect_ctrlr ...[2024-10-01 12:28:12.790951] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 [2024-10-01 12:28:12.791042] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 [2024-10-01 12:28:12.791261] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 [2024-10-01 12:28:12.791355] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 [2024-10-01 12:28:12.791456] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 passed 00:08:30.439 Test: test_retry_failover_ctrlr ...[2024-10-01 12:28:12.791677] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 passed 00:08:30.439 Test: test_fail_path ...[2024-10-01 12:28:12.792115] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 [2024-10-01 12:28:12.792223] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:08:30.439 [2024-10-01 12:28:12.792315] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 [2024-10-01 12:28:12.792396] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 [2024-10-01 12:28:12.792485] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 passed 00:08:30.439 Test: test_nvme_ns_cmp ...passed 00:08:30.439 Test: test_ana_transition ...passed 00:08:30.439 Test: test_set_preferred_path ...passed 00:08:30.439 Test: test_find_next_io_path ...passed 00:08:30.439 Test: test_find_io_path_min_qd ...passed 00:08:30.439 Test: test_disable_auto_failback ...[2024-10-01 12:28:12.793642] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 passed 00:08:30.439 Test: test_set_multipath_policy ...passed 00:08:30.439 Test: test_uuid_generation ...passed 00:08:30.439 Test: test_retry_io_to_same_path ...passed 00:08:30.439 Test: test_race_between_reset_and_disconnected ...passed 00:08:30.439 Test: test_ctrlr_op_rpc ...passed 00:08:30.439 Test: test_bdev_ctrlr_op_rpc ...passed 00:08:30.439 Test: test_disable_enable_ctrlr ...[2024-10-01 12:28:12.796165] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:30.439 passed 00:08:30.439 Test: test_delete_ctrlr_done ...[2024-10-01 12:28:12.796306] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
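The ctrlr_loss_timeout_sec / reconnect_delay_sec / fast_io_fail_timeout_sec errors logged by test_check_io_error_resiliency_params above spell out bdev_nvme's validation rules. A minimal sketch of those rules, with an illustrative signature rather than the exact bdev_nvme prototype:

#include <stdbool.h>
#include <stdint.h>

/* Sketch of the checks behind bdev_nvme_check_io_error_resiliency_params;
 * parameter names follow the log, everything else is illustrative. */
static bool
io_error_resiliency_params_ok(int32_t ctrlr_loss_timeout_sec,
			      uint32_t reconnect_delay_sec,
			      uint32_t fast_io_fail_timeout_sec)
{
	if (ctrlr_loss_timeout_sec < -1) {
		return false;	/* "can't be less than -1" */
	}
	if (ctrlr_loss_timeout_sec == 0) {
		/* both companion timeouts must also be 0 */
		return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
	}
	if (reconnect_delay_sec == 0) {
		return false;	/* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
	}
	if (ctrlr_loss_timeout_sec > 0 &&
	    reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
		return false;	/* delay may not exceed the loss timeout */
	}
	if (fast_io_fail_timeout_sec != 0) {
		if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
			return false;	/* delay may not exceed fast_io_fail */
		}
		if (ctrlr_loss_timeout_sec > 0 &&
		    fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
			return false;	/* fast_io_fail may not exceed loss timeout */
		}
	}
	return true;
}

A ctrlr_loss_timeout_sec of -1 (retry forever) skips the upper-bound comparisons, which is why only the > 0 case is checked against the other two timeouts.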
00:08:30.439 passed 00:08:30.439 Test: test_ns_remove_during_reset ...passed 00:08:30.439 00:08:30.439 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.439 suites 1 1 n/a 0 0 00:08:30.439 tests 48 48 48 0 0 00:08:30.439 asserts 3553 3553 3553 0 n/a 00:08:30.439 00:08:30.439 Elapsed time = 0.025 seconds 00:08:30.439 12:28:12 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:08:30.439 Test Options 00:08:30.439 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:08:30.439 00:08:30.439 00:08:30.439 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.439 http://cunit.sourceforge.net/ 00:08:30.439 00:08:30.439 00:08:30.439 Suite: raid 00:08:30.439 Test: test_create_raid ...passed 00:08:30.439 Test: test_create_raid_superblock ...passed 00:08:30.439 Test: test_delete_raid ...passed 00:08:30.439 Test: test_create_raid_invalid_args ...[2024-10-01 12:28:12.857017] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:08:30.439 [2024-10-01 12:28:12.857527] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:08:30.439 [2024-10-01 12:28:12.858038] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:08:30.439 [2024-10-01 12:28:12.858305] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:08:30.439 [2024-10-01 12:28:12.859179] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:08:30.439 passed 00:08:30.439 Test: test_delete_raid_invalid_args ...passed 00:08:30.439 Test: test_io_channel ...passed 00:08:30.439 Test: test_reset_io ...passed 00:08:30.439 Test: test_write_io ...passed 00:08:30.439 Test: test_read_io ...passed 00:08:31.818 Test: test_unmap_io ...passed 00:08:31.818 Test: test_io_failure ...[2024-10-01 12:28:13.968450] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:08:31.818 passed 00:08:31.818 Test: test_multi_raid_no_io ...passed 00:08:31.818 Test: test_multi_raid_with_io ...passed 00:08:31.818 Test: test_io_type_supported ...passed 00:08:31.818 Test: test_raid_json_dump_info ...passed 00:08:31.818 Test: test_context_size ...passed 00:08:31.818 Test: test_raid_level_conversions ...passed 00:08:31.818 Test: test_raid_process ...passed 00:08:31.818 Test: test_raid_io_split ...passed 00:08:31.818 00:08:31.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.818 suites 1 1 n/a 0 0 00:08:31.818 tests 19 19 19 0 0 00:08:31.818 asserts 177879 177879 177879 0 n/a 00:08:31.818 00:08:31.818 Elapsed time = 1.123 seconds 00:08:31.818 12:28:14 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:08:31.818 00:08:31.818 00:08:31.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.818 http://cunit.sourceforge.net/ 00:08:31.818 00:08:31.818 00:08:31.818 Suite: raid_sb 00:08:31.818 Test: test_raid_bdev_write_superblock ...passed 00:08:31.818 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:08:31.818 Test: test_raid_bdev_parse_superblock ...[2024-10-01 12:28:14.031594] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:08:31.818 passed 00:08:31.818 00:08:31.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.818 suites 1 1 n/a 0 0 00:08:31.818 tests 3 3 3 0 0 00:08:31.818 asserts 32 32 32 0 n/a 00:08:31.818 00:08:31.818 Elapsed time = 0.001 seconds 00:08:31.818 12:28:14 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:08:31.818 00:08:31.818 00:08:31.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.818 http://cunit.sourceforge.net/ 00:08:31.818 00:08:31.818 00:08:31.818 Suite: concat 00:08:31.818 Test: test_concat_start ...passed 00:08:31.818 Test: test_concat_rw ...passed 00:08:31.818 Test: test_concat_null_payload ...passed 00:08:31.818 00:08:31.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.818 suites 1 1 n/a 0 0 00:08:31.818 tests 3 3 3 0 0 00:08:31.818 asserts 8097 8097 8097 0 n/a 00:08:31.818 00:08:31.818 Elapsed time = 0.007 seconds 00:08:31.818 12:28:14 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:08:31.818 00:08:31.818 00:08:31.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.818 http://cunit.sourceforge.net/ 00:08:31.818 00:08:31.818 00:08:31.818 Suite: raid1 00:08:31.818 Test: test_raid1_start ...passed 00:08:31.818 Test: test_raid1_read_balancing ...passed 00:08:31.818 00:08:31.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.818 suites 1 1 n/a 0 0 00:08:31.818 tests 2 2 2 0 0 00:08:31.818 asserts 2856 2856 2856 0 n/a 00:08:31.818 00:08:31.818 Elapsed time = 0.005 seconds 00:08:31.818 12:28:14 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:08:31.818 00:08:31.818 00:08:31.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.818 http://cunit.sourceforge.net/ 00:08:31.818 00:08:31.818 00:08:31.818 Suite: zone 00:08:31.818 Test: test_zone_get_operation ...passed 00:08:31.818 Test: test_bdev_zone_get_info ...passed 00:08:31.818 Test: test_bdev_zone_management ...passed 00:08:31.818 Test: test_bdev_zone_append ...passed 00:08:31.818 Test: test_bdev_zone_append_with_md ...passed 00:08:31.818 Test: test_bdev_zone_appendv ...passed 00:08:31.818 Test: test_bdev_zone_appendv_with_md ...passed 00:08:31.818 Test: test_bdev_io_get_append_location ...passed 00:08:31.818 00:08:31.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.818 suites 1 1 n/a 0 0 00:08:31.818 tests 8 8 8 0 0 00:08:31.818 asserts 94 94 94 0 n/a 00:08:31.818 00:08:31.818 Elapsed time = 0.001 seconds 00:08:31.818 12:28:14 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:08:31.818 00:08:31.818 00:08:31.818 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.818 http://cunit.sourceforge.net/ 00:08:31.818 00:08:31.818 00:08:31.818 Suite: gpt_parse 00:08:31.818 Test: test_parse_mbr_and_primary ...[2024-10-01 12:28:14.244384] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:08:31.818 [2024-10-01 12:28:14.244890] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:08:31.818 [2024-10-01 12:28:14.244979] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: 
head_size=1633771873 00:08:31.818 [2024-10-01 12:28:14.245117] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:08:31.818 [2024-10-01 12:28:14.245188] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:08:31.818 [2024-10-01 12:28:14.245331] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:08:31.818 passed 00:08:31.818 Test: test_parse_secondary ...[2024-10-01 12:28:14.246335] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:08:31.818 [2024-10-01 12:28:14.246413] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:08:31.818 [2024-10-01 12:28:14.246478] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:08:31.818 [2024-10-01 12:28:14.246543] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:08:31.818 passed 00:08:31.818 Test: test_check_mbr ...[2024-10-01 12:28:14.247560] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:08:31.818 passed 00:08:31.818 Test: test_read_header ...[2024-10-01 12:28:14.247652] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:08:31.818 [2024-10-01 12:28:14.247747] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:08:31.818 [2024-10-01 12:28:14.247925] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:08:31.818 [2024-10-01 12:28:14.248056] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:08:31.818 [2024-10-01 12:28:14.248138] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:08:31.818 [2024-10-01 12:28:14.248209] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:08:31.818 [2024-10-01 12:28:14.248277] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:08:31.818 passed 00:08:31.818 Test: test_read_partitions ...[2024-10-01 12:28:14.248378] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:08:31.818 [2024-10-01 12:28:14.248468] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:08:31.818 [2024-10-01 12:28:14.248533] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:08:31.818 [2024-10-01 12:28:14.248591] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:08:31.819 [2024-10-01 12:28:14.249103] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:08:31.819 passed 00:08:31.819 
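The gpt_read_header failures above show the validation order test_read_header exercises: header size, then header CRC32, then the "EFI PART" signature, then my_lba, then the usable-LBA range. A condensed hypothetical version of that sequence (struct layout and names are illustrative, not SPDK's gpt.c types):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct gpt_header_sketch {
	char     signature[8];		/* expected to read "EFI PART" */
	uint32_t header_size;
	uint32_t header_crc32;
	uint64_t my_lba;
	uint64_t first_usable_lba;
	uint64_t last_usable_lba;
};

/* crc32_fn stands in for whatever CRC32 helper is available. */
static bool
gpt_header_ok(struct gpt_header_sketch *h, uint64_t head_lba, uint64_t lba_end,
	      uint32_t (*crc32_fn)(const void *buf, size_t len))
{
	if (h->header_size != sizeof(*h)) {
		return false;				/* head_size check */
	}
	uint32_t stored = h->header_crc32;
	h->header_crc32 = 0;				/* CRC is computed with the field zeroed */
	bool crc_ok = (crc32_fn(h, h->header_size) == stored);
	h->header_crc32 = stored;
	if (!crc_ok) {
		return false;				/* head crc32 does not match */
	}
	if (memcmp(h->signature, "EFI PART", 8) != 0) {
		return false;				/* signature did not match */
	}
	if (h->my_lba != head_lba) {
		return false;				/* head my_lba != expected */
	}
	if (h->first_usable_lba > h->last_usable_lba ||
	    h->last_usable_lba > lba_end) {
		return false;				/* lba range check error */
	}
	return true;
}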
00:08:31.819 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.819 suites 1 1 n/a 0 0 00:08:31.819 tests 5 5 5 0 0 00:08:31.819 asserts 33 33 33 0 n/a 00:08:31.819 00:08:31.819 Elapsed time = 0.006 seconds 00:08:31.819 12:28:14 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:08:31.819 00:08:31.819 00:08:31.819 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.819 http://cunit.sourceforge.net/ 00:08:31.819 00:08:31.819 00:08:31.819 Suite: bdev_part 00:08:31.819 Test: part_test ...[2024-10-01 12:28:14.302686] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:08:31.819 passed 00:08:31.819 Test: part_free_test ...passed 00:08:32.080 Test: part_get_io_channel_test ...passed 00:08:32.080 Test: part_construct_ext ...passed 00:08:32.080 00:08:32.080 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.080 suites 1 1 n/a 0 0 00:08:32.080 tests 4 4 4 0 0 00:08:32.080 asserts 48 48 48 0 n/a 00:08:32.080 00:08:32.080 Elapsed time = 0.046 seconds 00:08:32.080 12:28:14 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:08:32.080 00:08:32.080 00:08:32.080 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.080 http://cunit.sourceforge.net/ 00:08:32.080 00:08:32.080 00:08:32.080 Suite: scsi_nvme_suite 00:08:32.080 Test: scsi_nvme_translate_test ...passed 00:08:32.080 00:08:32.080 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.080 suites 1 1 n/a 0 0 00:08:32.080 tests 1 1 1 0 0 00:08:32.080 asserts 104 104 104 0 n/a 00:08:32.080 00:08:32.080 Elapsed time = 0.000 seconds 00:08:32.080 12:28:14 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:08:32.080 00:08:32.080 00:08:32.080 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.080 http://cunit.sourceforge.net/ 00:08:32.080 00:08:32.080 00:08:32.080 Suite: lvol 00:08:32.080 Test: ut_lvs_init ...[2024-10-01 12:28:14.456912] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:08:32.080 [2024-10-01 12:28:14.457499] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:08:32.080 passed 00:08:32.080 Test: ut_lvol_init ...passed 00:08:32.080 Test: ut_lvol_snapshot ...passed 00:08:32.080 Test: ut_lvol_clone ...passed 00:08:32.080 Test: ut_lvs_destroy ...passed 00:08:32.080 Test: ut_lvs_unload ...passed 00:08:32.080 Test: ut_lvol_resize ...[2024-10-01 12:28:14.459652] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:08:32.080 passed 00:08:32.080 Test: ut_lvol_set_read_only ...passed 00:08:32.080 Test: ut_lvol_hotremove ...passed 00:08:32.080 Test: ut_vbdev_lvol_get_io_channel ...passed 00:08:32.080 Test: ut_vbdev_lvol_io_type_supported ...passed 00:08:32.080 Test: ut_lvol_read_write ...passed 00:08:32.080 Test: ut_vbdev_lvol_submit_request ...passed 00:08:32.080 Test: ut_lvol_examine_config ...passed 00:08:32.080 Test: ut_lvol_examine_disk ...[2024-10-01 12:28:14.460682] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:08:32.080 passed 00:08:32.080 Test: ut_lvol_rename ...[2024-10-01 12:28:14.461850] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: 
*ERROR*: cannot add alias 'lvs/new_lvol_name' 00:08:32.080 [2024-10-01 12:28:14.461953] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:08:32.080 passed 00:08:32.080 Test: ut_bdev_finish ...passed 00:08:32.081 Test: ut_lvs_rename ...passed 00:08:32.081 Test: ut_lvol_seek ...passed 00:08:32.081 Test: ut_esnap_dev_create ...[2024-10-01 12:28:14.462650] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:08:32.081 [2024-10-01 12:28:14.462728] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:08:32.081 [2024-10-01 12:28:14.462764] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:08:32.081 [2024-10-01 12:28:14.462812] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:08:32.081 passed 00:08:32.081 Test: ut_lvol_esnap_clone_bad_args ...[2024-10-01 12:28:14.462946] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:08:32.081 [2024-10-01 12:28:14.462981] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:08:32.081 passed 00:08:32.081 00:08:32.081 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.081 suites 1 1 n/a 0 0 00:08:32.081 tests 21 21 21 0 0 00:08:32.081 asserts 712 712 712 0 n/a 00:08:32.081 00:08:32.081 Elapsed time = 0.007 seconds 00:08:32.081 12:28:14 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:08:32.081 00:08:32.081 00:08:32.081 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.081 http://cunit.sourceforge.net/ 00:08:32.081 00:08:32.081 00:08:32.081 Suite: zone_block 00:08:32.081 Test: test_zone_block_create ...passed 00:08:32.081 Test: test_zone_block_create_invalid ...[2024-10-01 12:28:14.546673] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:08:32.081 [2024-10-01 12:28:14.547067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-10-01 12:28:14.547250] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:08:32.081 [2024-10-01 12:28:14.547303] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-10-01 12:28:14.547465] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:08:32.081 [2024-10-01 12:28:14.547497] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-10-01 12:28:14.547592] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: 
Optimal open zones can't be 0 00:08:32.081 [2024-10-01 12:28:14.547644] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:08:32.081 Test: test_get_zone_info ...[2024-10-01 12:28:14.548205] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.548273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.548332] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 Test: test_supported_io_types ...passed 00:08:32.081 Test: test_reset_zone ...[2024-10-01 12:28:14.549169] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.549224] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 Test: test_open_zone ...[2024-10-01 12:28:14.549632] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.550205] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.550264] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 Test: test_zone_write ...[2024-10-01 12:28:14.550710] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:08:32.081 [2024-10-01 12:28:14.550764] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.550823] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:08:32.081 [2024-10-01 12:28:14.550864] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.556462] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:08:32.081 [2024-10-01 12:28:14.556511] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.556577] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:08:32.081 [2024-10-01 12:28:14.556607] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:08:32.081 [2024-10-01 12:28:14.562321] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:08:32.081 [2024-10-01 12:28:14.562375] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 Test: test_zone_read ...[2024-10-01 12:28:14.562849] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:08:32.081 [2024-10-01 12:28:14.562887] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.562963] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:08:32.081 [2024-10-01 12:28:14.563004] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.563429] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:08:32.081 [2024-10-01 12:28:14.563476] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 Test: test_close_zone ...[2024-10-01 12:28:14.563835] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.563927] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.564132] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 Test: test_finish_zone ...[2024-10-01 12:28:14.564173] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.564769] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.564815] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 Test: test_append_zone ...[2024-10-01 12:28:14.565147] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:08:32.081 [2024-10-01 12:28:14.565186] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 [2024-10-01 12:28:14.565244] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:08:32.081 [2024-10-01 12:28:14.565273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
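The zone_block_write errors in test_zone_write and test_append_zone above ("invalid state", "invalid zone", "invalid address (lba ..., wp ...)", "Write exceeds zone capacity") correspond to four ordered checks. A hypothetical condensed form, with an illustrative zone struct rather than vbdev_zone_block's internals:

#include <stdbool.h>
#include <stdint.h>

struct zone_sketch {
	uint64_t start_lba;	/* first LBA of the zone */
	uint64_t capacity;	/* writable blocks in the zone */
	uint64_t write_ptr;	/* next LBA that may be written */
	bool     writable;	/* false for e.g. full or read-only states */
};

/* zone may be NULL when the LBA maps to no configured zone. */
static bool
zone_write_ok(const struct zone_sketch *zone, uint64_t lba, uint64_t len)
{
	if (zone == NULL) {
		return false;	/* "Trying to write to invalid zone" */
	}
	if (!zone->writable) {
		return false;	/* "Trying to write to zone in invalid state" */
	}
	if (lba != zone->write_ptr) {
		return false;	/* writes must land exactly on the write pointer */
	}
	if (lba + len > zone->start_lba + zone->capacity) {
		return false;	/* "Write exceeds zone capacity" */
	}
	return true;
}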
00:08:32.081 [2024-10-01 12:28:14.576302] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:08:32.081 [2024-10-01 12:28:14.576359] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:08:32.081 passed 00:08:32.081 00:08:32.081 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.081 suites 1 1 n/a 0 0 00:08:32.081 tests 11 11 11 0 0 00:08:32.081 asserts 3437 3437 3437 0 n/a 00:08:32.081 00:08:32.081 Elapsed time = 0.031 seconds 00:08:32.341 12:28:14 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:08:32.341 00:08:32.341 00:08:32.341 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.341 http://cunit.sourceforge.net/ 00:08:32.341 00:08:32.341 00:08:32.341 Suite: bdev 00:08:32.341 Test: basic ...[2024-10-01 12:28:14.702031] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x559bee10b401): Operation not permitted (rc=-1) 00:08:32.341 [2024-10-01 12:28:14.702337] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x559bee10b3c0): Operation not permitted (rc=-1) 00:08:32.341 [2024-10-01 12:28:14.702378] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x559bee10b401): Operation not permitted (rc=-1) 00:08:32.341 passed 00:08:32.342 Test: unregister_and_close ...passed 00:08:32.342 Test: unregister_and_close_different_threads ...passed 00:08:32.342 Test: basic_qos ...passed 00:08:32.599 Test: put_channel_during_reset ...passed 00:08:32.600 Test: aborted_reset ...passed 00:08:32.600 Test: aborted_reset_no_outstanding_io ...passed 00:08:32.600 Test: io_during_reset ...passed 00:08:32.600 Test: reset_completions ...passed 00:08:32.858 Test: io_during_qos_queue ...passed 00:08:32.858 Test: io_during_qos_reset ...passed 00:08:32.858 Test: enomem ...passed 00:08:32.858 Test: enomem_multi_bdev ...passed 00:08:32.858 Test: enomem_multi_bdev_unregister ...passed 00:08:32.858 Test: enomem_multi_io_target ...passed 00:08:33.117 Test: qos_dynamic_enable ...passed 00:08:33.117 Test: bdev_histograms_mt ...passed 00:08:33.117 Test: bdev_set_io_timeout_mt ...[2024-10-01 12:28:15.502823] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:08:33.117 passed 00:08:33.117 Test: lock_lba_range_then_submit_io ...[2024-10-01 12:28:15.523034] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x559bee10b380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:08:33.117 passed 00:08:33.117 Test: unregister_during_reset ...passed 00:08:33.117 Test: event_notify_and_close ...passed 00:08:33.376 Test: unregister_and_qos_poller ...passed 00:08:33.376 Suite: bdev_wrong_thread 00:08:33.376 Test: spdk_bdev_register_wt ...[2024-10-01 12:28:15.681776] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:08:33.376 passed 00:08:33.376 Test: spdk_bdev_examine_wt ...[2024-10-01 12:28:15.682068] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:08:33.376 passed 00:08:33.376 00:08:33.376 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.376 suites 2 2 
n/a 0 0 00:08:33.376 tests 24 24 24 0 0 00:08:33.376 asserts 621 621 621 0 n/a 00:08:33.376 00:08:33.376 Elapsed time = 1.008 seconds 00:08:33.376 00:08:33.376 real 0m4.883s 00:08:33.376 user 0m2.117s 00:08:33.376 sys 0m2.774s 00:08:33.376 12:28:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.376 12:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.376 ************************************ 00:08:33.376 END TEST unittest_bdev 00:08:33.376 ************************************ 00:08:33.376 12:28:15 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:33.376 12:28:15 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:33.376 12:28:15 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:33.376 12:28:15 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:33.376 12:28:15 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:08:33.376 12:28:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:33.376 12:28:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.376 12:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.376 ************************************ 00:08:33.376 START TEST unittest_bdev_raid5f 00:08:33.376 ************************************ 00:08:33.376 12:28:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:08:33.376 00:08:33.376 00:08:33.376 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.376 http://cunit.sourceforge.net/ 00:08:33.376 00:08:33.376 00:08:33.376 Suite: raid5f 00:08:33.376 Test: test_raid5f_start ...passed 00:08:33.945 Test: test_raid5f_submit_read_request ...passed 00:08:34.204 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:08:37.490 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:08:55.644 Test: test_raid5f_chunk_write_error ...passed 00:09:02.207 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:09:04.111 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:09:30.714 Test: test_raid5f_submit_read_request_degraded ...passed 00:09:30.714 00:09:30.714 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.714 suites 1 1 n/a 0 0 00:09:30.714 tests 8 8 8 0 0 00:09:30.714 asserts 351864 351864 351864 0 n/a 00:09:30.714 00:09:30.714 Elapsed time = 54.423 seconds 00:09:30.714 00:09:30.714 real 0m54.582s 00:09:30.714 user 0m50.931s 00:09:30.714 sys 0m3.600s 00:09:30.714 12:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.714 12:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:30.714 ************************************ 00:09:30.714 END TEST unittest_bdev_raid5f 00:09:30.714 ************************************ 00:09:30.714 12:29:10 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:09:30.714 12:29:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.714 12:29:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.714 12:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:30.714 ************************************ 00:09:30.714 START TEST unittest_blob_blobfs 00:09:30.714 ************************************ 00:09:30.714 
12:29:10 -- common/autotest_common.sh@1104 -- # unittest_blob 00:09:30.714 12:29:10 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:09:30.714 12:29:10 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:09:30.714 00:09:30.714 00:09:30.714 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.714 http://cunit.sourceforge.net/ 00:09:30.714 00:09:30.714 00:09:30.714 Suite: blob_nocopy_noextent 00:09:30.714 Test: blob_init ...[2024-10-01 12:29:10.529744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:30.714 passed 00:09:30.714 Test: blob_thin_provision ...passed 00:09:30.714 Test: blob_read_only ...passed 00:09:30.714 Test: bs_load ...[2024-10-01 12:29:10.652715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:30.714 passed 00:09:30.714 Test: bs_load_custom_cluster_size ...passed 00:09:30.714 Test: bs_load_after_failed_grow ...passed 00:09:30.714 Test: bs_cluster_sz ...[2024-10-01 12:29:10.696916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:30.714 [2024-10-01 12:29:10.697351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:30.714 [2024-10-01 12:29:10.697526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:30.714 passed 00:09:30.714 Test: bs_resize_md ...passed 00:09:30.714 Test: bs_destroy ...passed 00:09:30.714 Test: bs_type ...passed 00:09:30.714 Test: bs_super_block ...passed 00:09:30.714 Test: bs_test_recover_cluster_count ...passed 00:09:30.714 Test: bs_grow_live ...passed 00:09:30.714 Test: bs_grow_live_no_space ...passed 00:09:30.714 Test: bs_test_grow ...passed 00:09:30.714 Test: blob_serialize_test ...passed 00:09:30.714 Test: super_block_crc ...passed 00:09:30.714 Test: blob_thin_prov_write_count_io ...passed 00:09:30.714 Test: bs_load_iter_test ...passed 00:09:30.714 Test: blob_relations ...[2024-10-01 12:29:10.947804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:30.714 [2024-10-01 12:29:10.947997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:10.948930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:30.714 [2024-10-01 12:29:10.948997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 passed 00:09:30.714 Test: blob_relations2 ...[2024-10-01 12:29:10.970410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:30.714 [2024-10-01 12:29:10.970505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:10.970556] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:30.714 [2024-10-01 
12:29:10.970577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:10.971947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:30.714 [2024-10-01 12:29:10.972010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:10.972402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:30.714 [2024-10-01 12:29:10.972467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 passed 00:09:30.714 Test: blob_relations3 ...passed 00:09:30.714 Test: blobstore_clean_power_failure ...passed 00:09:30.714 Test: blob_delete_snapshot_power_failure ...[2024-10-01 12:29:11.230321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:30.714 [2024-10-01 12:29:11.250257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:30.714 [2024-10-01 12:29:11.250375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:30.714 [2024-10-01 12:29:11.250431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:11.269643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:30.714 [2024-10-01 12:29:11.269761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:30.714 [2024-10-01 12:29:11.269816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:30.714 [2024-10-01 12:29:11.269860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:11.288991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:30.714 [2024-10-01 12:29:11.289153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:11.308315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:30.714 [2024-10-01 12:29:11.308475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 [2024-10-01 12:29:11.327791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:30.714 [2024-10-01 12:29:11.327934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:30.714 passed 00:09:30.714 Test: blob_create_snapshot_power_failure ...[2024-10-01 12:29:11.384834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:30.714 [2024-10-01 12:29:11.422964] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:30.714 [2024-10-01 12:29:11.442304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:30.714 passed 00:09:30.714 Test: blob_io_unit ...passed 00:09:30.714 Test: blob_io_unit_compatibility ...passed 00:09:30.714 Test: blob_ext_md_pages ...passed 00:09:30.714 Test: blob_esnap_io_4096_4096 ...passed 00:09:30.714 Test: blob_esnap_io_512_512 ...passed 00:09:30.714 Test: blob_esnap_io_4096_512 ...passed 00:09:30.715 Test: blob_esnap_io_512_4096 ...passed 00:09:30.715 Suite: blob_bs_nocopy_noextent 00:09:30.715 Test: blob_open ...passed 00:09:30.715 Test: blob_create ...[2024-10-01 12:29:11.812918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:30.715 passed 00:09:30.715 Test: blob_create_loop ...passed 00:09:30.715 Test: blob_create_fail ...[2024-10-01 12:29:11.945820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:30.715 passed 00:09:30.715 Test: blob_create_internal ...passed 00:09:30.715 Test: blob_create_zero_extent ...passed 00:09:30.715 Test: blob_snapshot ...passed 00:09:30.715 Test: blob_clone ...passed 00:09:30.715 Test: blob_inflate ...[2024-10-01 12:29:12.239756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:30.715 passed 00:09:30.715 Test: blob_delete ...passed 00:09:30.715 Test: blob_resize_test ...[2024-10-01 12:29:12.347850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:30.715 passed 00:09:30.715 Test: channel_ops ...passed 00:09:30.715 Test: blob_super ...passed 00:09:30.715 Test: blob_rw_verify_iov ...passed 00:09:30.715 Test: blob_unmap ...passed 00:09:30.715 Test: blob_iter ...passed 00:09:30.715 Test: blob_parse_md ...passed 00:09:30.715 Test: bs_load_pending_removal ...passed 00:09:30.715 Test: bs_unload ...[2024-10-01 12:29:12.777791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:30.715 passed 00:09:30.715 Test: bs_usable_clusters ...passed 00:09:30.715 Test: blob_crc ...[2024-10-01 12:29:12.885847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:30.715 [2024-10-01 12:29:12.886019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:30.715 passed 00:09:30.715 Test: blob_flags ...passed 00:09:30.715 Test: bs_version ...passed 00:09:30.715 Test: blob_set_xattrs_test ...[2024-10-01 12:29:13.048053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:30.715 [2024-10-01 12:29:13.048186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:30.715 passed 00:09:30.715 Test: blob_thin_prov_alloc ...passed 00:09:30.974 Test: blob_insert_cluster_msg_test ...passed 00:09:30.974 Test: blob_thin_prov_rw ...passed 
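blob_init and bs_cluster_sz above reject a 500-byte device block length and a cluster size (4095) smaller than the 4096-byte page. A minimal sketch of initializing a blobstore with an explicit, valid cluster size, assuming a recent SPDK where spdk_bs_opts_init takes the opts size; the device setup and the callback body are placeholders:

#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* bserrno is a negative errno on failure, e.g. for an
	 * unsupported block length or a too-small cluster size. */
	(void)cb_arg;
	(void)bs;
	(void)bserrno;
}

static void
init_blobstore(struct spdk_bs_dev *dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 1024 * 1024;	/* must be non-zero and >= the 4096-byte page */
	spdk_bs_init(dev, &opts, bs_init_done, NULL);
}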
00:09:30.974 Test: blob_thin_prov_rle ...passed 00:09:30.974 Test: blob_thin_prov_rw_iov ...passed 00:09:30.974 Test: blob_snapshot_rw ...passed 00:09:31.233 Test: blob_snapshot_rw_iov ...passed 00:09:31.492 Test: blob_inflate_rw ...passed 00:09:31.492 Test: blob_snapshot_freeze_io ...passed 00:09:31.492 Test: blob_operation_split_rw ...passed 00:09:31.750 Test: blob_operation_split_rw_iov ...passed 00:09:31.750 Test: blob_simultaneous_operations ...[2024-10-01 12:29:14.204007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:31.750 [2024-10-01 12:29:14.204147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:31.750 [2024-10-01 12:29:14.205527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:31.750 [2024-10-01 12:29:14.205608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:31.750 [2024-10-01 12:29:14.219741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:31.750 [2024-10-01 12:29:14.219829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:31.750 [2024-10-01 12:29:14.219993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:31.750 [2024-10-01 12:29:14.220032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:31.750 passed 00:09:32.009 Test: blob_persist_test ...passed 00:09:32.009 Test: blob_decouple_snapshot ...passed 00:09:32.009 Test: blob_seek_io_unit ...passed 00:09:32.009 Test: blob_nested_freezes ...passed 00:09:32.009 Suite: blob_blob_nocopy_noextent 00:09:32.268 Test: blob_write ...passed 00:09:32.268 Test: blob_read ...passed 00:09:32.268 Test: blob_rw_verify ...passed 00:09:32.268 Test: blob_rw_verify_iov_nomem ...passed 00:09:32.268 Test: blob_rw_iov_read_only ...passed 00:09:32.528 Test: blob_xattr ...passed 00:09:32.528 Test: blob_dirty_shutdown ...passed 00:09:32.528 Test: blob_is_degraded ...passed 00:09:32.528 Suite: blob_esnap_bs_nocopy_noextent 00:09:32.528 Test: blob_esnap_create ...passed 00:09:32.788 Test: blob_esnap_thread_add_remove ...passed 00:09:32.788 Test: blob_esnap_clone_snapshot ...passed 00:09:32.788 Test: blob_esnap_clone_inflate ...passed 00:09:32.788 Test: blob_esnap_clone_decouple ...passed 00:09:32.788 Test: blob_esnap_clone_reload ...passed 00:09:33.047 Test: blob_esnap_hotplug ...passed 00:09:33.047 Suite: blob_nocopy_extent 00:09:33.047 Test: blob_init ...[2024-10-01 12:29:15.341331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:33.047 passed 00:09:33.047 Test: blob_thin_provision ...passed 00:09:33.047 Test: blob_read_only ...passed 00:09:33.047 Test: bs_load ...[2024-10-01 12:29:15.418816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:33.047 passed 00:09:33.047 Test: bs_load_custom_cluster_size ...passed 00:09:33.047 Test: bs_load_after_failed_grow ...passed 00:09:33.047 Test: bs_cluster_sz ...[2024-10-01 12:29:15.460747] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:33.047 [2024-10-01 12:29:15.461073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:33.047 [2024-10-01 12:29:15.461133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:33.047 passed 00:09:33.047 Test: bs_resize_md ...passed 00:09:33.047 Test: bs_destroy ...passed 00:09:33.047 Test: bs_type ...passed 00:09:33.307 Test: bs_super_block ...passed 00:09:33.307 Test: bs_test_recover_cluster_count ...passed 00:09:33.307 Test: bs_grow_live ...passed 00:09:33.307 Test: bs_grow_live_no_space ...passed 00:09:33.307 Test: bs_test_grow ...passed 00:09:33.307 Test: blob_serialize_test ...passed 00:09:33.307 Test: super_block_crc ...passed 00:09:33.307 Test: blob_thin_prov_write_count_io ...passed 00:09:33.307 Test: bs_load_iter_test ...passed 00:09:33.307 Test: blob_relations ...[2024-10-01 12:29:15.701151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:33.307 [2024-10-01 12:29:15.701282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.307 [2024-10-01 12:29:15.702210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:33.307 [2024-10-01 12:29:15.702288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.307 passed 00:09:33.307 Test: blob_relations2 ...[2024-10-01 12:29:15.723560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:33.307 [2024-10-01 12:29:15.723657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.307 [2024-10-01 12:29:15.723689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:33.307 [2024-10-01 12:29:15.723726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.307 [2024-10-01 12:29:15.725249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:33.307 [2024-10-01 12:29:15.725325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.307 [2024-10-01 12:29:15.725733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:33.307 [2024-10-01 12:29:15.725787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.307 passed 00:09:33.307 Test: blob_relations3 ...passed 00:09:33.566 Test: blobstore_clean_power_failure ...passed 00:09:33.566 Test: blob_delete_snapshot_power_failure ...[2024-10-01 12:29:15.981306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:33.566 [2024-10-01 12:29:16.000780] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:33.566 [2024-10-01 12:29:16.020288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:33.566 [2024-10-01 12:29:16.020401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:33.566 [2024-10-01 12:29:16.020437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.566 [2024-10-01 12:29:16.039750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:33.566 [2024-10-01 12:29:16.039883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:33.566 [2024-10-01 12:29:16.039930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:33.566 [2024-10-01 12:29:16.039965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.566 [2024-10-01 12:29:16.059460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:33.566 [2024-10-01 12:29:16.059578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:33.566 [2024-10-01 12:29:16.059624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:33.566 [2024-10-01 12:29:16.059682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.566 [2024-10-01 12:29:16.079241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:33.566 [2024-10-01 12:29:16.079387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.824 [2024-10-01 12:29:16.099082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:33.825 [2024-10-01 12:29:16.099228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.825 [2024-10-01 12:29:16.118923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:33.825 [2024-10-01 12:29:16.119048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:33.825 passed 00:09:33.825 Test: blob_create_snapshot_power_failure ...[2024-10-01 12:29:16.177304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:33.825 [2024-10-01 12:29:16.196324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:33.825 [2024-10-01 12:29:16.234466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:33.825 [2024-10-01 12:29:16.254319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 
00:09:33.825 passed 00:09:33.825 Test: blob_io_unit ...passed 00:09:33.825 Test: blob_io_unit_compatibility ...passed 00:09:34.083 Test: blob_ext_md_pages ...passed 00:09:34.083 Test: blob_esnap_io_4096_4096 ...passed 00:09:34.083 Test: blob_esnap_io_512_512 ...passed 00:09:34.083 Test: blob_esnap_io_4096_512 ...passed 00:09:34.083 Test: blob_esnap_io_512_4096 ...passed 00:09:34.083 Suite: blob_bs_nocopy_extent 00:09:34.083 Test: blob_open ...passed 00:09:34.342 Test: blob_create ...[2024-10-01 12:29:16.630413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:34.342 passed 00:09:34.342 Test: blob_create_loop ...passed 00:09:34.342 Test: blob_create_fail ...[2024-10-01 12:29:16.773476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:34.342 passed 00:09:34.342 Test: blob_create_internal ...passed 00:09:34.601 Test: blob_create_zero_extent ...passed 00:09:34.601 Test: blob_snapshot ...passed 00:09:34.601 Test: blob_clone ...passed 00:09:34.601 Test: blob_inflate ...[2024-10-01 12:29:17.064949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:34.601 passed 00:09:34.860 Test: blob_delete ...passed 00:09:34.860 Test: blob_resize_test ...[2024-10-01 12:29:17.173042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:34.860 passed 00:09:34.860 Test: channel_ops ...passed 00:09:34.860 Test: blob_super ...passed 00:09:34.861 Test: blob_rw_verify_iov ...passed 00:09:35.119 Test: blob_unmap ...passed 00:09:35.119 Test: blob_iter ...passed 00:09:35.119 Test: blob_parse_md ...passed 00:09:35.119 Test: bs_load_pending_removal ...passed 00:09:35.119 Test: bs_unload ...[2024-10-01 12:29:17.611165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:35.119 passed 00:09:35.378 Test: bs_usable_clusters ...passed 00:09:35.378 Test: blob_crc ...[2024-10-01 12:29:17.721073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:35.378 [2024-10-01 12:29:17.721208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:35.378 passed 00:09:35.378 Test: blob_flags ...passed 00:09:35.378 Test: bs_version ...passed 00:09:35.378 Test: blob_set_xattrs_test ...[2024-10-01 12:29:17.888549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:35.378 [2024-10-01 12:29:17.888665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:35.378 passed 00:09:35.638 Test: blob_thin_prov_alloc ...passed 00:09:35.638 Test: blob_insert_cluster_msg_test ...passed 00:09:35.638 Test: blob_thin_prov_rw ...passed 00:09:35.935 Test: blob_thin_prov_rle ...passed 00:09:35.935 Test: blob_thin_prov_rw_iov ...passed 00:09:35.935 Test: blob_snapshot_rw ...passed 00:09:35.935 Test: blob_snapshot_rw_iov ...passed 00:09:36.194 Test: blob_inflate_rw ...passed 00:09:36.194 Test: blob_snapshot_freeze_io ...passed 
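The blob_simultaneous_operations errors above show the two conditions under which the blobstore refuses to delete a snapshot: it is still open, or more than one clone is backed by it. A simplified sketch of that decision follows, with hypothetical struct fields standing in for SPDK's internal blob state (the real check is bs_is_blob_deletable() in lib/blob/blobstore.c):

    #include <errno.h>
    #include <stdbool.h>

    /* Hypothetical, flattened view of the state bs_is_blob_deletable()
     * inspects; the field names here are illustrative, not SPDK's own. */
    struct blob_state {
        bool is_snapshot;
        bool is_open;      /* open references besides the deleter's own */
        int  clone_count;  /* clones currently backed by this snapshot */
    };

    static int is_blob_deletable(const struct blob_state *b)
    {
        if (b->is_snapshot && b->is_open) {
            return -EBUSY;  /* "Cannot remove snapshot because it is open" */
        }
        if (b->is_snapshot && b->clone_count > 1) {
            return -EBUSY;  /* "Cannot remove snapshot with more than one clone" */
        }
        return 0;           /* deletable; with exactly one clone, deletion
                             * proceeds by merging the snapshot into that
                             * clone, which appears to be the path the
                             * delete_snapshot_* sync-MD errors above exercise */
    }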
00:09:36.453 Test: blob_operation_split_rw ...passed 00:09:36.712 Test: blob_operation_split_rw_iov ...passed 00:09:36.712 Test: blob_simultaneous_operations ...[2024-10-01 12:29:19.031822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:36.712 [2024-10-01 12:29:19.031969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:36.712 [2024-10-01 12:29:19.033357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:36.712 [2024-10-01 12:29:19.033429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:36.712 [2024-10-01 12:29:19.047659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:36.712 [2024-10-01 12:29:19.047739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:36.712 [2024-10-01 12:29:19.047868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:36.712 [2024-10-01 12:29:19.047922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:36.712 passed 00:09:36.712 Test: blob_persist_test ...passed 00:09:36.712 Test: blob_decouple_snapshot ...passed 00:09:36.970 Test: blob_seek_io_unit ...passed 00:09:36.970 Test: blob_nested_freezes ...passed 00:09:36.970 Suite: blob_blob_nocopy_extent 00:09:36.970 Test: blob_write ...passed 00:09:36.970 Test: blob_read ...passed 00:09:37.229 Test: blob_rw_verify ...passed 00:09:37.229 Test: blob_rw_verify_iov_nomem ...passed 00:09:37.229 Test: blob_rw_iov_read_only ...passed 00:09:37.229 Test: blob_xattr ...passed 00:09:37.229 Test: blob_dirty_shutdown ...passed 00:09:37.491 Test: blob_is_degraded ...passed 00:09:37.491 Suite: blob_esnap_bs_nocopy_extent 00:09:37.491 Test: blob_esnap_create ...passed 00:09:37.491 Test: blob_esnap_thread_add_remove ...passed 00:09:37.492 Test: blob_esnap_clone_snapshot ...passed 00:09:37.492 Test: blob_esnap_clone_inflate ...passed 00:09:37.751 Test: blob_esnap_clone_decouple ...passed 00:09:37.751 Test: blob_esnap_clone_reload ...passed 00:09:37.751 Test: blob_esnap_hotplug ...passed 00:09:37.751 Suite: blob_copy_noextent 00:09:37.751 Test: blob_init ...[2024-10-01 12:29:20.174802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:37.751 passed 00:09:37.751 Test: blob_thin_provision ...passed 00:09:37.751 Test: blob_read_only ...passed 00:09:37.751 Test: bs_load ...[2024-10-01 12:29:20.248259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:37.751 passed 00:09:37.751 Test: bs_load_custom_cluster_size ...passed 00:09:38.009 Test: bs_load_after_failed_grow ...passed 00:09:38.009 Test: bs_cluster_sz ...[2024-10-01 12:29:20.286245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:38.009 [2024-10-01 12:29:20.286460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or 
increase cluster size. 00:09:38.009 [2024-10-01 12:29:20.286509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:38.009 passed 00:09:38.009 Test: bs_resize_md ...passed 00:09:38.009 Test: bs_destroy ...passed 00:09:38.009 Test: bs_type ...passed 00:09:38.009 Test: bs_super_block ...passed 00:09:38.009 Test: bs_test_recover_cluster_count ...passed 00:09:38.009 Test: bs_grow_live ...passed 00:09:38.009 Test: bs_grow_live_no_space ...passed 00:09:38.009 Test: bs_test_grow ...passed 00:09:38.009 Test: blob_serialize_test ...passed 00:09:38.009 Test: super_block_crc ...passed 00:09:38.009 Test: blob_thin_prov_write_count_io ...passed 00:09:38.009 Test: bs_load_iter_test ...passed 00:09:38.009 Test: blob_relations ...[2024-10-01 12:29:20.521618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:38.009 [2024-10-01 12:29:20.521739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.009 [2024-10-01 12:29:20.522302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:38.009 [2024-10-01 12:29:20.522339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.009 passed 00:09:38.267 Test: blob_relations2 ...[2024-10-01 12:29:20.542533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:38.267 [2024-10-01 12:29:20.542626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.267 [2024-10-01 12:29:20.542654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:38.267 [2024-10-01 12:29:20.542670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.267 [2024-10-01 12:29:20.543519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:38.267 [2024-10-01 12:29:20.543579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.267 [2024-10-01 12:29:20.543856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:38.267 [2024-10-01 12:29:20.543930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.267 passed 00:09:38.267 Test: blob_relations3 ...passed 00:09:38.267 Test: blobstore_clean_power_failure ...passed 00:09:38.526 Test: blob_delete_snapshot_power_failure ...[2024-10-01 12:29:20.798463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:38.526 [2024-10-01 12:29:20.817458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:38.526 [2024-10-01 12:29:20.817574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:38.526 [2024-10-01 12:29:20.817615] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.526 [2024-10-01 12:29:20.836473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:38.526 [2024-10-01 12:29:20.836575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:38.526 [2024-10-01 12:29:20.836626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:38.526 [2024-10-01 12:29:20.836655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.526 [2024-10-01 12:29:20.855608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:38.526 [2024-10-01 12:29:20.855742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.526 [2024-10-01 12:29:20.874991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:38.526 [2024-10-01 12:29:20.875131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.527 [2024-10-01 12:29:20.894288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:38.527 [2024-10-01 12:29:20.894423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:38.527 passed 00:09:38.527 Test: blob_create_snapshot_power_failure ...[2024-10-01 12:29:20.951716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:38.527 [2024-10-01 12:29:20.989624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:38.527 [2024-10-01 12:29:21.008842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:38.785 passed 00:09:38.785 Test: blob_io_unit ...passed 00:09:38.785 Test: blob_io_unit_compatibility ...passed 00:09:38.785 Test: blob_ext_md_pages ...passed 00:09:38.785 Test: blob_esnap_io_4096_4096 ...passed 00:09:38.785 Test: blob_esnap_io_512_512 ...passed 00:09:38.785 Test: blob_esnap_io_4096_512 ...passed 00:09:38.785 Test: blob_esnap_io_512_4096 ...passed 00:09:38.785 Suite: blob_bs_copy_noextent 00:09:39.044 Test: blob_open ...passed 00:09:39.044 Test: blob_create ...[2024-10-01 12:29:21.390202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:39.044 passed 00:09:39.044 Test: blob_create_loop ...passed 00:09:39.044 Test: blob_create_fail ...[2024-10-01 12:29:21.524080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:39.044 passed 00:09:39.303 Test: blob_create_internal ...passed 00:09:39.303 Test: blob_create_zero_extent ...passed 00:09:39.303 Test: blob_snapshot ...passed 00:09:39.303 Test: blob_clone ...passed 00:09:39.303 Test: blob_inflate ...[2024-10-01 12:29:21.810744] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:39.303 passed 00:09:39.561 Test: blob_delete ...passed 00:09:39.561 Test: blob_resize_test ...[2024-10-01 12:29:21.916258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:39.561 passed 00:09:39.561 Test: channel_ops ...passed 00:09:39.561 Test: blob_super ...passed 00:09:39.819 Test: blob_rw_verify_iov ...passed 00:09:39.819 Test: blob_unmap ...passed 00:09:39.819 Test: blob_iter ...passed 00:09:39.819 Test: blob_parse_md ...passed 00:09:39.819 Test: bs_load_pending_removal ...passed 00:09:40.077 Test: bs_unload ...[2024-10-01 12:29:22.349638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:40.077 passed 00:09:40.077 Test: bs_usable_clusters ...passed 00:09:40.077 Test: blob_crc ...[2024-10-01 12:29:22.462370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:40.077 [2024-10-01 12:29:22.462520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:40.077 passed 00:09:40.077 Test: blob_flags ...passed 00:09:40.077 Test: bs_version ...passed 00:09:40.336 Test: blob_set_xattrs_test ...[2024-10-01 12:29:22.627964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:40.336 [2024-10-01 12:29:22.628123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:40.336 passed 00:09:40.336 Test: blob_thin_prov_alloc ...passed 00:09:40.336 Test: blob_insert_cluster_msg_test ...passed 00:09:40.594 Test: blob_thin_prov_rw ...passed 00:09:40.594 Test: blob_thin_prov_rle ...passed 00:09:40.594 Test: blob_thin_prov_rw_iov ...passed 00:09:40.594 Test: blob_snapshot_rw ...passed 00:09:40.853 Test: blob_snapshot_rw_iov ...passed 00:09:40.853 Test: blob_inflate_rw ...passed 00:09:41.115 Test: blob_snapshot_freeze_io ...passed 00:09:41.115 Test: blob_operation_split_rw ...passed 00:09:41.375 Test: blob_operation_split_rw_iov ...passed 00:09:41.375 Test: blob_simultaneous_operations ...[2024-10-01 12:29:23.775872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:41.375 [2024-10-01 12:29:23.776015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:41.375 [2024-10-01 12:29:23.776551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:41.375 [2024-10-01 12:29:23.776615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:41.375 [2024-10-01 12:29:23.779840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:41.375 [2024-10-01 12:29:23.779916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:41.375 [2024-10-01 12:29:23.780035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:09:41.375 [2024-10-01 12:29:23.780055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:41.375 passed 00:09:41.375 Test: blob_persist_test ...passed 00:09:41.634 Test: blob_decouple_snapshot ...passed 00:09:41.634 Test: blob_seek_io_unit ...passed 00:09:41.634 Test: blob_nested_freezes ...passed 00:09:41.634 Suite: blob_blob_copy_noextent 00:09:41.634 Test: blob_write ...passed 00:09:41.634 Test: blob_read ...passed 00:09:41.893 Test: blob_rw_verify ...passed 00:09:41.893 Test: blob_rw_verify_iov_nomem ...passed 00:09:41.893 Test: blob_rw_iov_read_only ...passed 00:09:41.893 Test: blob_xattr ...passed 00:09:41.893 Test: blob_dirty_shutdown ...passed 00:09:42.152 Test: blob_is_degraded ...passed 00:09:42.152 Suite: blob_esnap_bs_copy_noextent 00:09:42.152 Test: blob_esnap_create ...passed 00:09:42.152 Test: blob_esnap_thread_add_remove ...passed 00:09:42.152 Test: blob_esnap_clone_snapshot ...passed 00:09:42.410 Test: blob_esnap_clone_inflate ...passed 00:09:42.410 Test: blob_esnap_clone_decouple ...passed 00:09:42.410 Test: blob_esnap_clone_reload ...passed 00:09:42.410 Test: blob_esnap_hotplug ...passed 00:09:42.410 Suite: blob_copy_extent 00:09:42.410 Test: blob_init ...[2024-10-01 12:29:24.849513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:42.410 passed 00:09:42.410 Test: blob_thin_provision ...passed 00:09:42.410 Test: blob_read_only ...passed 00:09:42.410 Test: bs_load ...[2024-10-01 12:29:24.921912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:42.410 passed 00:09:42.669 Test: bs_load_custom_cluster_size ...passed 00:09:42.669 Test: bs_load_after_failed_grow ...passed 00:09:42.669 Test: bs_cluster_sz ...[2024-10-01 12:29:24.960193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:42.669 [2024-10-01 12:29:24.960393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
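Each bs_cluster_sz run drives the same pair of rejections: an all-zero option ("Blobstore options cannot be set to 0") and a cluster size below the 4096-byte metadata page ("Cluster size 4095 is smaller than page size 4096"). A minimal sketch of that validation, assuming the 4096 page size quoted in the message (the real checks live in bs_opts_verify() and bs_alloc() in lib/blob/blobstore.c and cover more fields):

    #include <errno.h>
    #include <stdint.h>

    #define BS_PAGE_SIZE 4096u  /* assumed from the log message */

    /* Simplified cluster-size validation mirroring the two errors above. */
    static int verify_cluster_sz(uint32_t cluster_sz)
    {
        if (cluster_sz == 0) {
            return -EINVAL;  /* "Blobstore options cannot be set to 0" */
        }
        if (cluster_sz < BS_PAGE_SIZE) {
            return -EINVAL;  /* "Cluster size 4095 is smaller than page
                              * size 4096" */
        }
        return 0;
    }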
00:09:42.669 [2024-10-01 12:29:24.960435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:42.669 passed 00:09:42.669 Test: bs_resize_md ...passed 00:09:42.669 Test: bs_destroy ...passed 00:09:42.669 Test: bs_type ...passed 00:09:42.669 Test: bs_super_block ...passed 00:09:42.669 Test: bs_test_recover_cluster_count ...passed 00:09:42.669 Test: bs_grow_live ...passed 00:09:42.669 Test: bs_grow_live_no_space ...passed 00:09:42.669 Test: bs_test_grow ...passed 00:09:42.669 Test: blob_serialize_test ...passed 00:09:42.669 Test: super_block_crc ...passed 00:09:42.669 Test: blob_thin_prov_write_count_io ...passed 00:09:42.669 Test: bs_load_iter_test ...passed 00:09:42.669 Test: blob_relations ...[2024-10-01 12:29:25.189630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:42.669 [2024-10-01 12:29:25.189760] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:42.669 [2024-10-01 12:29:25.190668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:42.669 [2024-10-01 12:29:25.190726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:42.669 passed 00:09:42.927 Test: blob_relations2 ...[2024-10-01 12:29:25.211033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:42.927 [2024-10-01 12:29:25.211136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:42.927 [2024-10-01 12:29:25.211183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:42.928 [2024-10-01 12:29:25.211213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:42.928 [2024-10-01 12:29:25.212535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:42.928 [2024-10-01 12:29:25.212602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:42.928 [2024-10-01 12:29:25.213003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:42.928 [2024-10-01 12:29:25.213053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:42.928 passed 00:09:42.928 Test: blob_relations3 ...passed 00:09:42.928 Test: blobstore_clean_power_failure ...passed 00:09:43.187 Test: blob_delete_snapshot_power_failure ...[2024-10-01 12:29:25.460464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:43.187 [2024-10-01 12:29:25.479596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:43.187 [2024-10-01 12:29:25.498716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:43.187 [2024-10-01 12:29:25.498844] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:43.187 [2024-10-01 12:29:25.498879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:43.187 [2024-10-01 12:29:25.521569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:43.187 [2024-10-01 12:29:25.521670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:43.187 [2024-10-01 12:29:25.521694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:43.187 [2024-10-01 12:29:25.521724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:43.187 [2024-10-01 12:29:25.540484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:43.187 [2024-10-01 12:29:25.540589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:43.187 [2024-10-01 12:29:25.540614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:43.187 [2024-10-01 12:29:25.540645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:43.187 [2024-10-01 12:29:25.559274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:43.187 [2024-10-01 12:29:25.559403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:43.187 [2024-10-01 12:29:25.578304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:43.187 [2024-10-01 12:29:25.578429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:43.187 [2024-10-01 12:29:25.597498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:43.187 [2024-10-01 12:29:25.597615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:43.187 passed 00:09:43.187 Test: blob_create_snapshot_power_failure ...[2024-10-01 12:29:25.654164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:43.187 [2024-10-01 12:29:25.672955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:43.187 [2024-10-01 12:29:25.709941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:43.446 [2024-10-01 12:29:25.728642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:43.446 passed 00:09:43.446 Test: blob_io_unit ...passed 00:09:43.446 Test: blob_io_unit_compatibility ...passed 00:09:43.446 Test: blob_ext_md_pages ...passed 00:09:43.446 Test: blob_esnap_io_4096_4096 ...passed 00:09:43.446 Test: blob_esnap_io_512_512 ...passed 00:09:43.446 Test: blob_esnap_io_4096_512 ...passed 00:09:43.704 Test: 
blob_esnap_io_512_4096 ...passed 00:09:43.704 Suite: blob_bs_copy_extent 00:09:43.704 Test: blob_open ...passed 00:09:43.704 Test: blob_create ...[2024-10-01 12:29:26.096596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:43.704 passed 00:09:43.704 Test: blob_create_loop ...passed 00:09:43.704 Test: blob_create_fail ...[2024-10-01 12:29:26.230123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:43.961 passed 00:09:43.961 Test: blob_create_internal ...passed 00:09:43.961 Test: blob_create_zero_extent ...passed 00:09:43.961 Test: blob_snapshot ...passed 00:09:43.961 Test: blob_clone ...passed 00:09:44.221 Test: blob_inflate ...[2024-10-01 12:29:26.505482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:44.221 passed 00:09:44.221 Test: blob_delete ...passed 00:09:44.221 Test: blob_resize_test ...[2024-10-01 12:29:26.609159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:44.221 passed 00:09:44.221 Test: channel_ops ...passed 00:09:44.221 Test: blob_super ...passed 00:09:44.479 Test: blob_rw_verify_iov ...passed 00:09:44.479 Test: blob_unmap ...passed 00:09:44.479 Test: blob_iter ...passed 00:09:44.479 Test: blob_parse_md ...passed 00:09:44.479 Test: bs_load_pending_removal ...passed 00:09:44.738 Test: bs_unload ...[2024-10-01 12:29:27.042614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:44.738 passed 00:09:44.738 Test: bs_usable_clusters ...passed 00:09:44.738 Test: blob_crc ...[2024-10-01 12:29:27.151162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:44.738 [2024-10-01 12:29:27.151300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:44.738 passed 00:09:44.738 Test: blob_flags ...passed 00:09:44.997 Test: bs_version ...passed 00:09:44.997 Test: blob_set_xattrs_test ...[2024-10-01 12:29:27.314187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:44.997 [2024-10-01 12:29:27.314306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:44.997 passed 00:09:44.997 Test: blob_thin_prov_alloc ...passed 00:09:44.997 Test: blob_insert_cluster_msg_test ...passed 00:09:45.255 Test: blob_thin_prov_rw ...passed 00:09:45.255 Test: blob_thin_prov_rle ...passed 00:09:45.255 Test: blob_thin_prov_rw_iov ...passed 00:09:45.255 Test: blob_snapshot_rw ...passed 00:09:45.514 Test: blob_snapshot_rw_iov ...passed 00:09:45.514 Test: blob_inflate_rw ...passed 00:09:45.773 Test: blob_snapshot_freeze_io ...passed 00:09:45.773 Test: blob_operation_split_rw ...passed 00:09:46.032 Test: blob_operation_split_rw_iov ...passed 00:09:46.032 Test: blob_simultaneous_operations ...[2024-10-01 12:29:28.418184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:46.032 [2024-10-01 
12:29:28.418306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:46.032 [2024-10-01 12:29:28.418802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:46.032 [2024-10-01 12:29:28.418863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:46.032 [2024-10-01 12:29:28.421973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:46.032 [2024-10-01 12:29:28.422039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:46.032 [2024-10-01 12:29:28.422161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:46.032 [2024-10-01 12:29:28.422186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:46.032 passed 00:09:46.032 Test: blob_persist_test ...passed 00:09:46.291 Test: blob_decouple_snapshot ...passed 00:09:46.291 Test: blob_seek_io_unit ...passed 00:09:46.291 Test: blob_nested_freezes ...passed 00:09:46.291 Suite: blob_blob_copy_extent 00:09:46.291 Test: blob_write ...passed 00:09:46.291 Test: blob_read ...passed 00:09:46.549 Test: blob_rw_verify ...passed 00:09:46.549 Test: blob_rw_verify_iov_nomem ...passed 00:09:46.549 Test: blob_rw_iov_read_only ...passed 00:09:46.549 Test: blob_xattr ...passed 00:09:46.549 Test: blob_dirty_shutdown ...passed 00:09:46.807 Test: blob_is_degraded ...passed 00:09:46.807 Suite: blob_esnap_bs_copy_extent 00:09:46.807 Test: blob_esnap_create ...passed 00:09:46.807 Test: blob_esnap_thread_add_remove ...passed 00:09:46.807 Test: blob_esnap_clone_snapshot ...passed 00:09:47.066 Test: blob_esnap_clone_inflate ...passed 00:09:47.066 Test: blob_esnap_clone_decouple ...passed 00:09:47.066 Test: blob_esnap_clone_reload ...passed 00:09:47.066 Test: blob_esnap_hotplug ...passed 00:09:47.066 00:09:47.066 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.066 suites 16 16 n/a 0 0 00:09:47.066 tests 348 348 348 0 0 00:09:47.066 asserts 92605 92605 92605 0 n/a 00:09:47.066 00:09:47.066 Elapsed time = 18.972 seconds 00:09:47.325 12:29:29 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:09:47.325 00:09:47.325 00:09:47.325 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.325 http://cunit.sourceforge.net/ 00:09:47.325 00:09:47.325 00:09:47.325 Suite: blob_bdev 00:09:47.325 Test: create_bs_dev ...passed 00:09:47.325 Test: create_bs_dev_ro ...[2024-10-01 12:29:29.641160] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:09:47.325 passed 00:09:47.325 Test: create_bs_dev_rw ...passed 00:09:47.325 Test: claim_bs_dev ...[2024-10-01 12:29:29.641804] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:09:47.325 passed 00:09:47.325 Test: claim_bs_dev_ro ...passed 00:09:47.325 Test: deferred_destroy_refs ...passed 00:09:47.325 Test: deferred_destroy_channels ...passed 00:09:47.325 Test: deferred_destroy_threads ...passed 00:09:47.325 00:09:47.325 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.325 suites 1 1 n/a 0 0 00:09:47.325 tests 8 8 8 0 0 00:09:47.325 
asserts 119 119 119 0 n/a 00:09:47.325 00:09:47.325 Elapsed time = 0.001 seconds 00:09:47.325 12:29:29 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:09:47.325 00:09:47.325 00:09:47.325 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.325 http://cunit.sourceforge.net/ 00:09:47.325 00:09:47.325 00:09:47.325 Suite: tree 00:09:47.325 Test: blobfs_tree_op_test ...passed 00:09:47.325 00:09:47.325 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.325 suites 1 1 n/a 0 0 00:09:47.325 tests 1 1 1 0 0 00:09:47.325 asserts 27 27 27 0 n/a 00:09:47.325 00:09:47.325 Elapsed time = 0.000 seconds 00:09:47.325 12:29:29 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:09:47.325 00:09:47.325 00:09:47.325 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.325 http://cunit.sourceforge.net/ 00:09:47.325 00:09:47.325 00:09:47.325 Suite: blobfs_async_ut 00:09:47.325 Test: fs_init ...passed 00:09:47.325 Test: fs_open ...passed 00:09:47.325 Test: fs_create ...passed 00:09:47.325 Test: fs_truncate ...passed 00:09:47.325 Test: fs_rename ...[2024-10-01 12:29:29.850041] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:09:47.325 passed 00:09:47.584 Test: fs_rw_async ...passed 00:09:47.584 Test: fs_writev_readv_async ...passed 00:09:47.584 Test: tree_find_buffer_ut ...passed 00:09:47.584 Test: channel_ops ...passed 00:09:47.584 Test: channel_ops_sync ...passed 00:09:47.584 00:09:47.584 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.584 suites 1 1 n/a 0 0 00:09:47.584 tests 10 10 10 0 0 00:09:47.584 asserts 292 292 292 0 n/a 00:09:47.584 00:09:47.584 Elapsed time = 0.157 seconds 00:09:47.584 12:29:29 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:09:47.584 00:09:47.584 00:09:47.584 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.584 http://cunit.sourceforge.net/ 00:09:47.584 00:09:47.584 00:09:47.584 Suite: blobfs_sync_ut 00:09:47.584 Test: cache_read_after_write ...[2024-10-01 12:29:30.034485] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:09:47.584 passed 00:09:47.584 Test: file_length ...passed 00:09:47.584 Test: append_write_to_extend_blob ...passed 00:09:47.584 Test: partial_buffer ...passed 00:09:47.584 Test: cache_write_null_buffer ...passed 00:09:47.584 Test: fs_create_sync ...passed 00:09:47.844 Test: fs_rename_sync ...passed 00:09:47.844 Test: cache_append_no_cache ...passed 00:09:47.844 Test: fs_delete_file_without_close ...passed 00:09:47.844 00:09:47.844 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.844 suites 1 1 n/a 0 0 00:09:47.844 tests 9 9 9 0 0 00:09:47.844 asserts 345 345 345 0 n/a 00:09:47.844 00:09:47.844 Elapsed time = 0.357 seconds 00:09:47.844 12:29:30 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:09:47.844 00:09:47.844 00:09:47.844 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.844 http://cunit.sourceforge.net/ 00:09:47.844 00:09:47.844 00:09:47.844 Suite: blobfs_bdev_ut 00:09:47.844 Test: spdk_blobfs_bdev_detect_test ...[2024-10-01 12:29:30.232775] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:09:47.844 passed 00:09:47.844 Test: spdk_blobfs_bdev_create_test ...[2024-10-01 12:29:30.234485] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:47.844 passed 00:09:47.844 Test: spdk_blobfs_bdev_mount_test ...passed 00:09:47.844 00:09:47.844 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.844 suites 1 1 n/a 0 0 00:09:47.844 tests 3 3 3 0 0 00:09:47.844 asserts 9 9 9 0 n/a 00:09:47.844 00:09:47.844 Elapsed time = 0.002 seconds 00:09:47.844 00:09:47.844 real 0m19.761s 00:09:47.844 user 0m19.052s 00:09:47.844 sys 0m0.906s 00:09:47.844 12:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.844 12:29:30 -- common/autotest_common.sh@10 -- # set +x 00:09:47.844 ************************************ 00:09:47.844 END TEST unittest_blob_blobfs 00:09:47.844 ************************************ 00:09:47.844 12:29:30 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:09:47.844 12:29:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:47.844 12:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.844 12:29:30 -- common/autotest_common.sh@10 -- # set +x 00:09:47.844 ************************************ 00:09:47.844 START TEST unittest_event 00:09:47.844 ************************************ 00:09:47.844 12:29:30 -- common/autotest_common.sh@1104 -- # unittest_event 00:09:47.844 12:29:30 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:09:47.844 00:09:47.844 00:09:47.844 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.844 http://cunit.sourceforge.net/ 00:09:47.844 00:09:47.844 00:09:47.844 Suite: app_suite 00:09:47.844 Test: test_spdk_app_parse_args ...app_ut [options] 00:09:47.844 options: 00:09:47.844 -c, --config JSON config file (default none) 00:09:47.844 --json JSON config file (default none) 00:09:47.844 --json-ignore-init-errors 00:09:47.844 don't exit on invalid config entry 00:09:47.844 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:47.844 -g, --single-file-segments 00:09:47.844 force creating just one hugetlbfs file 00:09:47.844 -h, --help show this usage 00:09:47.844 -i, --shm-id shared memory ID (optional) 00:09:47.844 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:47.844 --lcores lcore to CPU mapping list. The list is in the format: 00:09:47.844 [<,lcores[@CPUs]>...] 00:09:47.844 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:47.844 Within the group, '-' is used for range separator, 00:09:47.844 ',' is used for single number separator. 00:09:47.844 '( )' can be omitted for single element group, 00:09:47.844 '@' can be omitted if cpus and lcores have the same value 00:09:47.844 -n, --mem-channels channel number of memory channels used for DPDK 00:09:47.844 -p, --main-core main (primary) core for DPDK 00:09:47.844 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:47.844 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:47.844 app_ut: invalid option -- 'z' 00:09:47.844 --disable-cpumask-locks Disable CPU core lock files. 
00:09:47.844 --silence-noticelog disable notice level logging to stderr 00:09:47.844 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:47.844 -u, --no-pci disable PCI access 00:09:47.844 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:47.844 --max-delay maximum reactor delay (in microseconds) 00:09:47.844 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:47.844 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:47.844 -R, --huge-unlink unlink huge files after initialization 00:09:47.844 -v, --version print SPDK version 00:09:47.844 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:47.844 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:47.844 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:47.844 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:47.844 Tracepoints vary in size and can use more than one trace entry. 00:09:47.844 --rpcs-allowed comma-separated list of permitted RPCS 00:09:47.844 --env-context Opaque context for use of the env implementation 00:09:47.844 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:47.844 --no-huge run without using hugepages 00:09:47.844 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:09:47.844 -e, --tpoint-group [:] 00:09:47.844 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:09:47.844 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:47.844 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:09:47.844 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:47.844 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:47.844 app_ut [options] 00:09:47.844 options: 00:09:47.844 -c, --config JSON config file (default none) 00:09:47.844 --json JSON config file (default none) 00:09:47.844 --json-ignore-init-errors 00:09:47.844 don't exit on invalid config entry 00:09:47.844 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:47.844 -g, --single-file-segments 00:09:47.844 force creating just one hugetlbfs file 00:09:47.844 -h, --help show this usage 00:09:47.844 -i, --shm-id shared memory ID (optional) 00:09:47.844 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:47.844 --lcores lcore to CPU mapping list. The list is in the format: 00:09:47.844 [<,lcores[@CPUs]>...] 00:09:47.844 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:47.844 Within the group, '-' is used for range separator, 00:09:47.844 ',' is used for single number separator. 
00:09:47.844 '( )' can be omitted for single element group,app_ut: unrecognized option '--test-long-opt' 00:09:47.844 00:09:47.844 '@' can be omitted if cpus and lcores have the same value 00:09:47.844 -n, --mem-channels channel number of memory channels used for DPDK 00:09:47.844 -p, --main-core main (primary) core for DPDK 00:09:47.844 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:47.844 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:47.844 --disable-cpumask-locks Disable CPU core lock files. 00:09:47.844 --silence-noticelog disable notice level logging to stderr 00:09:47.844 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:47.844 -u, --no-pci disable PCI access 00:09:47.844 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:47.844 --max-delay maximum reactor delay (in microseconds) 00:09:47.844 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:47.844 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:47.845 -R, --huge-unlink unlink huge files after initialization 00:09:47.845 -v, --version print SPDK version 00:09:47.845 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:47.845 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:47.845 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:47.845 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:47.845 Tracepoints vary in size and can use more than one trace entry. 00:09:47.845 --rpcs-allowed comma-separated list of permitted RPCS 00:09:47.845 --env-context Opaque context for use of the env implementation 00:09:47.845 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:47.845 --no-huge run without using hugepages 00:09:47.845 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:09:47.845 -e, --tpoint-group [:] 00:09:47.845 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:09:47.845 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:47.845 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:09:47.845 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:47.845 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:47.845 [2024-10-01 12:29:30.344602] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:09:47.845 app_ut [options] 00:09:47.845 options: 00:09:47.845 -c, --config JSON config file (default none) 00:09:47.845 --json JSON config file (default none) 00:09:47.845 --json-ignore-init-errors 00:09:47.845 don't exit on invalid config entry 00:09:47.845 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:47.845 -g, --single-file-segments 00:09:47.845 force creating just one hugetlbfs file 00:09:47.845 -h, --help show this usage 00:09:47.845 -i, --shm-id shared memory ID (optional) 00:09:47.845 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:47.845 --lcores lcore to CPU mapping list. 
The list is in the format: 00:09:47.845 <lcores[@CPUs]>[<,lcores[@CPUs]>...] [2024-10-01 12:29:30.345077] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:09:47.845 00:09:47.845 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:47.845 Within the group, '-' is used for range separator, 00:09:47.845 ',' is used for single number separator. 00:09:47.845 '( )' can be omitted for single element group, 00:09:47.845 '@' can be omitted if cpus and lcores have the same value 00:09:47.845 -n, --mem-channels channel number of memory channels used for DPDK 00:09:47.845 -p, --main-core main (primary) core for DPDK 00:09:47.845 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:47.845 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:47.845 --disable-cpumask-locks Disable CPU core lock files. 00:09:47.845 --silence-noticelog disable notice level logging to stderr 00:09:47.845 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:47.845 -u, --no-pci disable PCI access 00:09:47.845 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:47.845 --max-delay maximum reactor delay (in microseconds) 00:09:47.845 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:47.845 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:47.845 -R, --huge-unlink unlink huge files after initialization 00:09:47.845 -v, --version print SPDK version 00:09:47.845 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:47.845 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:47.845 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:47.845 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:47.845 Tracepoints vary in size and can use more than one trace entry. 00:09:47.845 --rpcs-allowed comma-separated list of permitted RPCS 00:09:47.845 --env-context Opaque context for use of the env implementation 00:09:47.845 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:47.845 --no-huge run without using hugepages 00:09:47.845 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:09:47.845 -e, --tpoint-group <group-name>[:<tpoint_mask>] 00:09:47.845 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:09:47.845 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:47.845 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:09:47.845 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:47.845 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:47.845 passed 00:09:47.845 00:09:47.845 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.845 suites 1 1 n/a 0 0 00:09:47.845 tests 1 1 1 0 0 00:09:47.845 asserts 8 8 8 0 n/a 00:09:47.845 00:09:47.845 Elapsed time = 0.002 seconds 00:09:47.845 [2024-10-01 12:29:30.345470] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:09:48.105 12:29:30 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:09:48.105 00:09:48.105 00:09:48.105 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.105 http://cunit.sourceforge.net/ 00:09:48.105 00:09:48.105 00:09:48.105 Suite: app_suite 00:09:48.105 Test: test_create_reactor ...passed 00:09:48.105 Test: test_init_reactors ...passed 00:09:48.105 Test: test_event_call ...passed 00:09:48.105 Test: test_schedule_thread ...passed 00:09:48.105 Test: test_reschedule_thread ...passed 00:09:48.105 Test: test_bind_thread ...passed 00:09:48.105 Test: test_for_each_reactor ...passed 00:09:48.105 Test: test_reactor_stats ...passed 00:09:48.105 Test: test_scheduler ...passed 00:09:48.105 Test: test_governor ...passed 00:09:48.105 00:09:48.105 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.105 suites 1 1 n/a 0 0 00:09:48.105 tests 10 10 10 0 0 00:09:48.105 asserts 344 344 344 0 n/a 00:09:48.105 00:09:48.105 Elapsed time = 0.018 seconds 00:09:48.105 00:09:48.105 real 0m0.132s 00:09:48.105 user 0m0.085s 00:09:48.105 sys 0m0.049s 00:09:48.105 12:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.105 12:29:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.105 ************************************ 00:09:48.105 END TEST unittest_event 00:09:48.105 ************************************ 00:09:48.105 12:29:30 -- unit/unittest.sh@233 -- # uname -s 00:09:48.105 12:29:30 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:09:48.105 12:29:30 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:09:48.105 12:29:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:48.105 12:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.105 12:29:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.105 ************************************ 00:09:48.105 START TEST unittest_ftl 00:09:48.105 ************************************ 00:09:48.105 12:29:30 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:09:48.106 12:29:30 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:09:48.106 00:09:48.106 00:09:48.106 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.106 http://cunit.sourceforge.net/ 00:09:48.106 00:09:48.106 00:09:48.106 Suite: ftl_band_suite 00:09:48.106 Test: test_band_block_offset_from_addr_base ...passed 00:09:48.106 Test: test_band_block_offset_from_addr_offset ...passed 00:09:48.365 Test: test_band_addr_from_block_offset ...passed 00:09:48.365 Test: test_band_set_addr ...passed 00:09:48.365 Test: test_invalidate_addr ...passed 00:09:48.365 Test: test_next_xfer_addr ...passed 00:09:48.365 00:09:48.365 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.365 suites 1 1 n/a 0 0 00:09:48.365 tests 6 6 6 0 0 00:09:48.365 asserts 30356 30356 30356 0 n/a 00:09:48.365 
00:09:48.365 Elapsed time = 0.180 seconds 00:09:48.365 12:29:30 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:09:48.365 00:09:48.365 00:09:48.365 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.365 http://cunit.sourceforge.net/ 00:09:48.365 00:09:48.365 00:09:48.365 Suite: ftl_bitmap 00:09:48.365 Test: test_ftl_bitmap_create ...[2024-10-01 12:29:30.837687] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:09:48.365 passed 00:09:48.365 Test: test_ftl_bitmap_get ...passed 00:09:48.365 Test: test_ftl_bitmap_set ...[2024-10-01 12:29:30.838007] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:09:48.365 passed 00:09:48.365 Test: test_ftl_bitmap_clear ...passed 00:09:48.365 Test: test_ftl_bitmap_find_first_set ...passed 00:09:48.365 Test: test_ftl_bitmap_find_first_clear ...passed 00:09:48.365 Test: test_ftl_bitmap_count_set ...passed 00:09:48.365 00:09:48.365 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.365 suites 1 1 n/a 0 0 00:09:48.365 tests 7 7 7 0 0 00:09:48.365 asserts 137 137 137 0 n/a 00:09:48.365 00:09:48.365 Elapsed time = 0.001 seconds 00:09:48.365 12:29:30 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:09:48.365 00:09:48.365 00:09:48.365 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.365 http://cunit.sourceforge.net/ 00:09:48.365 00:09:48.365 00:09:48.365 Suite: ftl_io_suite 00:09:48.365 Test: test_completion ...passed 00:09:48.365 Test: test_multiple_ios ...passed 00:09:48.365 00:09:48.365 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.365 suites 1 1 n/a 0 0 00:09:48.365 tests 2 2 2 0 0 00:09:48.365 asserts 47 47 47 0 n/a 00:09:48.365 00:09:48.365 Elapsed time = 0.003 seconds 00:09:48.625 12:29:30 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:09:48.625 00:09:48.625 00:09:48.625 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.625 http://cunit.sourceforge.net/ 00:09:48.625 00:09:48.625 00:09:48.625 Suite: ftl_mngt 00:09:48.625 Test: test_next_step ...passed 00:09:48.625 Test: test_continue_step ...passed 00:09:48.625 Test: test_get_func_and_step_cntx_alloc ...passed 00:09:48.625 Test: test_fail_step ...passed 00:09:48.625 Test: test_mngt_call_and_call_rollback ...passed 00:09:48.625 Test: test_nested_process_failure ...passed 00:09:48.625 00:09:48.625 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.625 suites 1 1 n/a 0 0 00:09:48.625 tests 6 6 6 0 0 00:09:48.625 asserts 176 176 176 0 n/a 00:09:48.625 00:09:48.625 Elapsed time = 0.002 seconds 00:09:48.625 12:29:30 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:09:48.625 00:09:48.625 00:09:48.625 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.625 http://cunit.sourceforge.net/ 00:09:48.625 00:09:48.625 00:09:48.625 Suite: ftl_mempool 00:09:48.625 Test: test_ftl_mempool_create ...passed 00:09:48.625 Test: test_ftl_mempool_get_put ...passed 00:09:48.625 00:09:48.625 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.625 suites 1 1 n/a 0 0 00:09:48.625 tests 2 2 2 0 0 00:09:48.625 asserts 36 36 36 0 n/a 00:09:48.625 00:09:48.625 Elapsed time = 0.000 seconds 00:09:48.625 12:29:30 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:09:48.625 00:09:48.625 00:09:48.625 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.625 http://cunit.sourceforge.net/ 00:09:48.625 00:09:48.625 00:09:48.625 Suite: ftl_addr64_suite 00:09:48.625 Test: test_addr_cached ...passed 00:09:48.625 00:09:48.625 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.625 suites 1 1 n/a 0 0 00:09:48.625 tests 1 1 1 0 0 00:09:48.625 asserts 1536 1536 1536 0 n/a 00:09:48.625 00:09:48.625 Elapsed time = 0.001 seconds 00:09:48.625 12:29:31 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:09:48.625 00:09:48.625 00:09:48.625 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.625 http://cunit.sourceforge.net/ 00:09:48.625 00:09:48.625 00:09:48.625 Suite: ftl_sb 00:09:48.625 Test: test_sb_crc_v2 ...passed 00:09:48.625 Test: test_sb_crc_v3 ...passed 00:09:48.625 Test: test_sb_v3_md_layout ...[2024-10-01 12:29:31.067478] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:09:48.625 [2024-10-01 12:29:31.067982] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:48.625 [2024-10-01 12:29:31.068071] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:48.625 [2024-10-01 12:29:31.068163] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:48.625 passed 00:09:48.625 Test: test_sb_v5_md_layout ...passed 00:09:48.625 00:09:48.625 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.625 suites 1 1 n/a 0 0 00:09:48.625 tests 4 4 4 0 0 00:09:48.625 asserts 148 148 148 0 n/a 00:09:48.625 00:09:48.625 Elapsed time = 0.003 seconds 00:09:48.625 [2024-10-01 12:29:31.068225] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:48.625 [2024-10-01 12:29:31.068380] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:09:48.625 [2024-10-01 12:29:31.068454] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:48.625 [2024-10-01 12:29:31.068545] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:48.625 [2024-10-01 12:29:31.068692] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:48.625 [2024-10-01 12:29:31.068780] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:48.625 [2024-10-01 12:29:31.068821] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:48.625 12:29:31 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:09:48.625 00:09:48.625 00:09:48.625 CUnit - A unit testing framework 
for C - Version 2.1-3 00:09:48.625 http://cunit.sourceforge.net/ 00:09:48.625 00:09:48.625 00:09:48.625 Suite: ftl_layout_upgrade 00:09:48.625 Test: test_l2p_upgrade ...passed 00:09:48.625 00:09:48.625 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.625 suites 1 1 n/a 0 0 00:09:48.625 tests 1 1 1 0 0 00:09:48.625 asserts 140 140 140 0 n/a 00:09:48.625 00:09:48.625 Elapsed time = 0.001 seconds 00:09:48.625 00:09:48.625 real 0m0.606s 00:09:48.625 user 0m0.235s 00:09:48.625 sys 0m0.375s 00:09:48.625 12:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.625 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.625 ************************************ 00:09:48.625 END TEST unittest_ftl 00:09:48.625 ************************************ 00:09:48.886 12:29:31 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:48.886 12:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:48.886 12:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.886 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.886 ************************************ 00:09:48.886 START TEST unittest_accel 00:09:48.886 ************************************ 00:09:48.886 12:29:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:48.886 00:09:48.886 00:09:48.886 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.886 http://cunit.sourceforge.net/ 00:09:48.886 00:09:48.886 00:09:48.886 Suite: accel_sequence 00:09:48.886 Test: test_sequence_fill_copy ...passed 00:09:48.886 Test: test_sequence_abort ...passed 00:09:48.886 Test: test_sequence_append_error ...passed 00:09:48.886 Test: test_sequence_completion_error ...[2024-10-01 12:29:31.254613] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f34dc9027c0 00:09:48.886 [2024-10-01 12:29:31.254894] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f34dc9027c0 00:09:48.886 [2024-10-01 12:29:31.254934] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f34dc9027c0 00:09:48.886 [2024-10-01 12:29:31.254977] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f34dc9027c0 00:09:48.886 passed 00:09:48.886 Test: test_sequence_decompress ...passed 00:09:48.886 Test: test_sequence_reverse ...passed 00:09:48.886 Test: test_sequence_copy_elision ...passed 00:09:48.886 Test: test_sequence_accel_buffers ...passed 00:09:48.886 Test: test_sequence_memory_domain ...[2024-10-01 12:29:31.264002] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:09:48.886 [2024-10-01 12:29:31.264152] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:09:48.886 passed 00:09:48.886 Test: test_sequence_module_memory_domain ...passed 00:09:48.886 Test: test_sequence_crypto ...passed 00:09:48.886 Test: test_sequence_driver ...[2024-10-01 12:29:31.269420] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f34dbcda7c0 using driver: ut 00:09:48.886 
[2024-10-01 12:29:31.269512] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f34dbcda7c0 through driver: ut 00:09:48.886 passed 00:09:48.886 Test: test_sequence_same_iovs ...passed 00:09:48.886 Test: test_sequence_crc32 ...passed 00:09:48.886 Suite: accel 00:09:48.886 Test: test_spdk_accel_task_complete ...passed 00:09:48.886 Test: test_get_task ...passed 00:09:48.886 Test: test_spdk_accel_submit_copy ...passed 00:09:48.886 Test: test_spdk_accel_submit_dualcast ...passed 00:09:48.886 Test: test_spdk_accel_submit_compare ...passed 00:09:48.886 Test: test_spdk_accel_submit_fill ...passed 00:09:48.886 Test: test_spdk_accel_submit_crc32c ...passed 00:09:48.886 Test: test_spdk_accel_submit_crc32cv ...passed 00:09:48.886 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:09:48.886 Test: test_spdk_accel_submit_xor ...[2024-10-01 12:29:31.273411] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:48.886 [2024-10-01 12:29:31.273462] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:48.886 passed 00:09:48.886 Test: test_spdk_accel_module_find_by_name ...passed 00:09:48.886 Test: test_spdk_accel_module_register ...passed 00:09:48.886 00:09:48.886 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.886 suites 2 2 n/a 0 0 00:09:48.886 tests 26 26 26 0 0 00:09:48.886 asserts 831 831 831 0 n/a 00:09:48.886 00:09:48.886 Elapsed time = 0.029 seconds 00:09:48.886 00:09:48.886 real 0m0.085s 00:09:48.886 user 0m0.054s 00:09:48.886 sys 0m0.032s 00:09:48.886 12:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.886 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.886 ************************************ 00:09:48.886 END TEST unittest_accel 00:09:48.886 ************************************ 00:09:48.886 12:29:31 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:48.886 12:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:48.886 12:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.886 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.886 ************************************ 00:09:48.886 START TEST unittest_ioat 00:09:48.886 ************************************ 00:09:48.886 12:29:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:48.886 00:09:48.886 00:09:48.886 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.886 http://cunit.sourceforge.net/ 00:09:48.886 00:09:48.886 00:09:48.886 Suite: ioat 00:09:48.886 Test: ioat_state_check ...passed 00:09:48.886 00:09:48.886 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.886 suites 1 1 n/a 0 0 00:09:48.886 tests 1 1 1 0 0 00:09:48.886 asserts 32 32 32 0 n/a 00:09:48.886 00:09:48.886 Elapsed time = 0.000 seconds 00:09:49.146 00:09:49.146 real 0m0.047s 00:09:49.146 user 0m0.026s 00:09:49.146 sys 0m0.022s 00:09:49.146 12:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.146 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.146 ************************************ 00:09:49.146 END TEST unittest_ioat 00:09:49.146 ************************************ 00:09:49.146 12:29:31 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:49.146 12:29:31 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:49.146 12:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.146 12:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.146 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.146 ************************************ 00:09:49.146 START TEST unittest_idxd_user 00:09:49.146 ************************************ 00:09:49.146 12:29:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:49.146 00:09:49.146 00:09:49.146 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.146 http://cunit.sourceforge.net/ 00:09:49.146 00:09:49.146 00:09:49.146 Suite: idxd_user 00:09:49.146 Test: test_idxd_wait_cmd ...[2024-10-01 12:29:31.540489] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:49.146 [2024-10-01 12:29:31.540820] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:09:49.146 passed 00:09:49.146 Test: test_idxd_reset_dev ...passed 00:09:49.146 Test: test_idxd_group_config ...passed 00:09:49.146 Test: test_idxd_wq_config ...passed 00:09:49.146 00:09:49.146 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.146 suites 1 1 n/a 0 0 00:09:49.146 tests 4 4 4 0 0 00:09:49.146 asserts 20 20 20 0 n/a 00:09:49.146 00:09:49.146 Elapsed time = 0.001 seconds 00:09:49.146 [2024-10-01 12:29:31.540986] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:49.146 [2024-10-01 12:29:31.541047] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:09:49.146 00:09:49.146 real 0m0.048s 00:09:49.146 user 0m0.036s 00:09:49.146 sys 0m0.013s 00:09:49.146 12:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.146 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.146 ************************************ 00:09:49.146 END TEST unittest_idxd_user 00:09:49.146 ************************************ 00:09:49.146 12:29:31 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:09:49.146 12:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.146 12:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.146 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.146 ************************************ 00:09:49.146 START TEST unittest_iscsi 00:09:49.146 ************************************ 00:09:49.146 12:29:31 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:09:49.146 12:29:31 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:09:49.146 00:09:49.146 00:09:49.146 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.146 http://cunit.sourceforge.net/ 00:09:49.146 00:09:49.146 00:09:49.146 Suite: conn_suite 00:09:49.146 Test: read_task_split_in_order_case ...passed 00:09:49.146 Test: read_task_split_reverse_order_case ...passed 00:09:49.146 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:09:49.146 Test: process_non_read_task_completion_test ...passed 00:09:49.146 Test: free_tasks_on_connection ...passed 00:09:49.146 Test: free_tasks_with_queued_datain ...passed 00:09:49.146 Test: 
abort_queued_datain_task_test ...passed 00:09:49.146 Test: abort_queued_datain_tasks_test ...passed 00:09:49.146 00:09:49.146 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.146 suites 1 1 n/a 0 0 00:09:49.146 tests 8 8 8 0 0 00:09:49.146 asserts 230 230 230 0 n/a 00:09:49.146 00:09:49.146 Elapsed time = 0.001 seconds 00:09:49.409 12:29:31 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:09:49.409 00:09:49.409 00:09:49.409 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.409 http://cunit.sourceforge.net/ 00:09:49.409 00:09:49.409 00:09:49.409 Suite: iscsi_suite 00:09:49.409 Test: param_negotiation_test ...passed 00:09:49.409 Test: list_negotiation_test ...passed 00:09:49.409 Test: parse_valid_test ...passed 00:09:49.409 Test: parse_invalid_test ...[2024-10-01 12:29:31.723566] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:09:49.409 [2024-10-01 12:29:31.723904] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:09:49.409 [2024-10-01 12:29:31.723971] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:09:49.409 [2024-10-01 12:29:31.724049] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:09:49.409 passed 00:09:49.409 00:09:49.409 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.409 suites 1 1 n/a 0 0 00:09:49.409 tests 4 4 4 0 0 00:09:49.409 asserts 161 161 161 0 n/a 00:09:49.409 00:09:49.409 Elapsed time = 0.006 seconds 00:09:49.409 [2024-10-01 12:29:31.724216] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:09:49.409 [2024-10-01 12:29:31.724291] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:09:49.409 [2024-10-01 12:29:31.724432] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:09:49.409 12:29:31 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:09:49.409 00:09:49.409 00:09:49.409 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.409 http://cunit.sourceforge.net/ 00:09:49.409 00:09:49.409 00:09:49.409 Suite: iscsi_target_node_suite 00:09:49.409 Test: add_lun_test_cases ...[2024-10-01 12:29:31.772559] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:09:49.409 [2024-10-01 12:29:31.772891] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:09:49.409 passed 00:09:49.409 Test: allow_any_allowed ...passed 00:09:49.409 Test: allow_ipv6_allowed ...passed 00:09:49.409 Test: allow_ipv6_denied ...passed 00:09:49.409 Test: allow_ipv6_invalid ...passed 00:09:49.409 Test: allow_ipv4_allowed ...passed 00:09:49.409 Test: allow_ipv4_denied ...passed 00:09:49.409 Test: allow_ipv4_invalid ...passed 00:09:49.409 Test: node_access_allowed ...[2024-10-01 12:29:31.772985] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:49.409 [2024-10-01 12:29:31.773029] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:49.409 [2024-10-01 12:29:31.773064] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: 
*ERROR*: spdk_scsi_dev_add_lun failed 00:09:49.409 passed 00:09:49.409 Test: node_access_denied_by_empty_netmask ...passed 00:09:49.409 Test: node_access_multi_initiator_groups_cases ...passed 00:09:49.409 Test: allow_iscsi_name_multi_maps_case ...passed 00:09:49.409 Test: chap_param_test_cases ...[2024-10-01 12:29:31.773485] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:09:49.409 [2024-10-01 12:29:31.773526] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:09:49.409 [2024-10-01 12:29:31.773587] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:09:49.409 [2024-10-01 12:29:31.773623] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:09:49.409 [2024-10-01 12:29:31.773666] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:09:49.409 passed 00:09:49.409 00:09:49.409 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.409 suites 1 1 n/a 0 0 00:09:49.409 tests 13 13 13 0 0 00:09:49.409 asserts 50 50 50 0 n/a 00:09:49.409 00:09:49.409 Elapsed time = 0.001 seconds 00:09:49.409 12:29:31 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:09:49.409 00:09:49.409 00:09:49.409 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.409 http://cunit.sourceforge.net/ 00:09:49.409 00:09:49.409 00:09:49.409 Suite: iscsi_suite 00:09:49.409 Test: op_login_check_target_test ...passed 00:09:49.409 Test: op_login_session_normal_test ...[2024-10-01 12:29:31.830217] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:09:49.409 [2024-10-01 12:29:31.830609] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:49.409 [2024-10-01 12:29:31.830664] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:49.409 [2024-10-01 12:29:31.830713] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:49.409 [2024-10-01 12:29:31.830781] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:09:49.409 [2024-10-01 12:29:31.830909] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:49.409 [2024-10-01 12:29:31.831023] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:09:49.409 [2024-10-01 12:29:31.831094] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:49.409 passed 00:09:49.409 Test: maxburstlength_test ...passed 00:09:49.409 Test: underflow_for_read_transfer_test ...passed 00:09:49.409 Test: underflow_for_zero_read_transfer_test ...passed 00:09:49.409 Test: underflow_for_request_sense_test ...[2024-10-01 12:29:31.831352] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data 
length is larger than the value sent by R2T PDU 00:09:49.409 [2024-10-01 12:29:31.831420] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:09:49.409 passed 00:09:49.409 Test: underflow_for_check_condition_test ...passed 00:09:49.409 Test: add_transfer_task_test ...passed 00:09:49.409 Test: get_transfer_task_test ...passed 00:09:49.409 Test: del_transfer_task_test ...passed 00:09:49.409 Test: clear_all_transfer_tasks_test ...passed 00:09:49.409 Test: build_iovs_test ...passed 00:09:49.409 Test: build_iovs_with_md_test ...passed 00:09:49.409 Test: pdu_hdr_op_login_test ...[2024-10-01 12:29:31.833042] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:09:49.409 [2024-10-01 12:29:31.833172] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:09:49.409 [2024-10-01 12:29:31.833285] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:09:49.409 passed 00:09:49.409 Test: pdu_hdr_op_text_test ...[2024-10-01 12:29:31.833388] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:49.409 [2024-10-01 12:29:31.833484] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:09:49.409 [2024-10-01 12:29:31.833541] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:09:49.409 passed 00:09:49.409 Test: pdu_hdr_op_logout_test ...[2024-10-01 12:29:31.833625] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:09:49.409 passed 00:09:49.409 Test: pdu_hdr_op_scsi_test ...[2024-10-01 12:29:31.833813] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:49.409 [2024-10-01 12:29:31.833857] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:49.409 [2024-10-01 12:29:31.833921] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:09:49.409 [2024-10-01 12:29:31.834032] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:49.409 [2024-10-01 12:29:31.834139] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:09:49.409 [2024-10-01 12:29:31.834361] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:49.409 passed 00:09:49.409 Test: pdu_hdr_op_task_mgmt_test ...[2024-10-01 12:29:31.834496] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:09:49.409 [2024-10-01 12:29:31.834576] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:09:49.409 passed 00:09:49.409 Test: pdu_hdr_op_nopout_test ...[2024-10-01 12:29:31.834831] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:09:49.409 [2024-10-01 12:29:31.834925] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:49.409 [2024-10-01 12:29:31.834966] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:49.409 passed 00:09:49.409 Test: pdu_hdr_op_data_test ...[2024-10-01 12:29:31.835013] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:09:49.409 [2024-10-01 12:29:31.835057] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:09:49.409 [2024-10-01 12:29:31.835149] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:49.409 [2024-10-01 12:29:31.835227] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:49.410 [2024-10-01 12:29:31.835292] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:09:49.410 [2024-10-01 12:29:31.835354] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:09:49.410 [2024-10-01 12:29:31.835436] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:09:49.410 passed 00:09:49.410 Test: empty_text_with_cbit_test ...[2024-10-01 12:29:31.835486] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:09:49.410 passed 00:09:49.410 Test: pdu_payload_read_test ...[2024-10-01 
12:29:31.837339] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:09:49.410 passed 00:09:49.410 Test: data_out_pdu_sequence_test ...passed 00:09:49.410 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:09:49.410 00:09:49.410 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.410 suites 1 1 n/a 0 0 00:09:49.410 tests 24 24 24 0 0 00:09:49.410 asserts 150253 150253 150253 0 n/a 00:09:49.410 00:09:49.410 Elapsed time = 0.013 seconds 00:09:49.410 12:29:31 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:09:49.410 00:09:49.410 00:09:49.410 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.410 http://cunit.sourceforge.net/ 00:09:49.410 00:09:49.410 00:09:49.410 Suite: init_grp_suite 00:09:49.410 Test: create_initiator_group_success_case ...passed 00:09:49.410 Test: find_initiator_group_success_case ...passed 00:09:49.410 Test: register_initiator_group_twice_case ...passed 00:09:49.410 Test: add_initiator_name_success_case ...passed 00:09:49.410 Test: add_initiator_name_fail_case ...[2024-10-01 12:29:31.898981] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:09:49.410 passed 00:09:49.410 Test: delete_all_initiator_names_success_case ...passed 00:09:49.410 Test: add_netmask_success_case ...passed 00:09:49.410 Test: add_netmask_fail_case ...passed 00:09:49.410 Test: delete_all_netmasks_success_case ...[2024-10-01 12:29:31.899645] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:09:49.410 passed 00:09:49.410 Test: initiator_name_overwrite_all_to_any_case ...passed 00:09:49.410 Test: netmask_overwrite_all_to_any_case ...passed 00:09:49.410 Test: add_delete_initiator_names_case ...passed 00:09:49.410 Test: add_duplicated_initiator_names_case ...passed 00:09:49.410 Test: delete_nonexisting_initiator_names_case ...passed 00:09:49.410 Test: add_delete_netmasks_case ...passed 00:09:49.410 Test: add_duplicated_netmasks_case ...passed 00:09:49.410 Test: delete_nonexisting_netmasks_case ...passed 00:09:49.410 00:09:49.410 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.410 suites 1 1 n/a 0 0 00:09:49.410 tests 17 17 17 0 0 00:09:49.410 asserts 108 108 108 0 n/a 00:09:49.410 00:09:49.410 Elapsed time = 0.002 seconds 00:09:49.410 12:29:31 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:09:49.669 00:09:49.669 00:09:49.669 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.669 http://cunit.sourceforge.net/ 00:09:49.669 00:09:49.669 00:09:49.669 Suite: portal_grp_suite 00:09:49.669 Test: portal_create_ipv4_normal_case ...passed 00:09:49.669 Test: portal_create_ipv6_normal_case ...passed 00:09:49.669 Test: portal_create_ipv4_wildcard_case ...passed 00:09:49.669 Test: portal_create_ipv6_wildcard_case ...passed 00:09:49.669 Test: portal_create_twice_case ...[2024-10-01 12:29:31.953548] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:09:49.669 passed 00:09:49.669 Test: portal_grp_register_unregister_case ...passed 00:09:49.669 Test: portal_grp_register_twice_case ...passed 00:09:49.669 Test: portal_grp_add_delete_case ...passed 00:09:49.669 Test: portal_grp_add_delete_twice_case ...passed 00:09:49.669 00:09:49.669 Run Summary: 
Type Total Ran Passed Failed Inactive 00:09:49.669 suites 1 1 n/a 0 0 00:09:49.669 tests 9 9 9 0 0 00:09:49.669 asserts 44 44 44 0 n/a 00:09:49.669 00:09:49.669 Elapsed time = 0.004 seconds 00:09:49.669 00:09:49.669 real 0m0.338s 00:09:49.669 user 0m0.175s 00:09:49.669 sys 0m0.166s 00:09:49.669 12:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.669 12:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.669 ************************************ 00:09:49.669 END TEST unittest_iscsi 00:09:49.669 ************************************ 00:09:49.669 12:29:32 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:09:49.669 12:29:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.669 12:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.669 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.669 ************************************ 00:09:49.669 START TEST unittest_json 00:09:49.669 ************************************ 00:09:49.669 12:29:32 -- common/autotest_common.sh@1104 -- # unittest_json 00:09:49.669 12:29:32 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:09:49.669 00:09:49.669 00:09:49.669 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.669 http://cunit.sourceforge.net/ 00:09:49.669 00:09:49.669 00:09:49.669 Suite: json 00:09:49.669 Test: test_parse_literal ...passed 00:09:49.669 Test: test_parse_string_simple ...passed 00:09:49.669 Test: test_parse_string_control_chars ...passed 00:09:49.669 Test: test_parse_string_utf8 ...passed 00:09:49.669 Test: test_parse_string_escapes_twochar ...passed 00:09:49.669 Test: test_parse_string_escapes_unicode ...passed 00:09:49.669 Test: test_parse_number ...passed 00:09:49.669 Test: test_parse_array ...passed 00:09:49.669 Test: test_parse_object ...passed 00:09:49.669 Test: test_parse_nesting ...passed 00:09:49.669 Test: test_parse_comment ...passed 00:09:49.669 00:09:49.669 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.669 suites 1 1 n/a 0 0 00:09:49.669 tests 11 11 11 0 0 00:09:49.669 asserts 1516 1516 1516 0 n/a 00:09:49.669 00:09:49.669 Elapsed time = 0.002 seconds 00:09:49.669 12:29:32 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:09:49.669 00:09:49.669 00:09:49.669 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.669 http://cunit.sourceforge.net/ 00:09:49.669 00:09:49.669 00:09:49.669 Suite: json 00:09:49.669 Test: test_strequal ...passed 00:09:49.669 Test: test_num_to_uint16 ...passed 00:09:49.669 Test: test_num_to_int32 ...passed 00:09:49.669 Test: test_num_to_uint64 ...passed 00:09:49.669 Test: test_decode_object ...passed 00:09:49.669 Test: test_decode_array ...passed 00:09:49.669 Test: test_decode_bool ...passed 00:09:49.669 Test: test_decode_uint16 ...passed 00:09:49.669 Test: test_decode_int32 ...passed 00:09:49.669 Test: test_decode_uint32 ...passed 00:09:49.669 Test: test_decode_uint64 ...passed 00:09:49.669 Test: test_decode_string ...passed 00:09:49.669 Test: test_decode_uuid ...passed 00:09:49.669 Test: test_find ...passed 00:09:49.669 Test: test_find_array ...passed 00:09:49.669 Test: test_iterating ...passed 00:09:49.669 Test: test_free_object ...passed 00:09:49.669 00:09:49.669 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.669 suites 1 1 n/a 0 0 00:09:49.669 tests 17 17 17 0 0 00:09:49.669 asserts 236 236 236 0 n/a 00:09:49.669 00:09:49.669 Elapsed time = 0.003 seconds 00:09:49.669 
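Each CUnit suite above is a standalone binary that unittest.sh invokes directly; a minimal sketch of re-running the two JSON suites just completed by hand, assuming the same /home/vagrant/spdk_repo build tree used throughout this run (paths are copied verbatim from the log):

    # each binary prints its own CUnit run summary, as seen above
    /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
    /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut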
12:29:32 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:09:49.669 00:09:49.669 00:09:49.669 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.669 http://cunit.sourceforge.net/ 00:09:49.669 00:09:49.669 00:09:49.669 Suite: json 00:09:49.669 Test: test_write_literal ...passed 00:09:49.669 Test: test_write_string_simple ...passed 00:09:49.669 Test: test_write_string_escapes ...passed 00:09:49.669 Test: test_write_string_utf16le ...passed 00:09:49.669 Test: test_write_number_int32 ...passed 00:09:49.928 Test: test_write_number_uint32 ...passed 00:09:49.928 Test: test_write_number_uint128 ...passed 00:09:49.928 Test: test_write_string_number_uint128 ...passed 00:09:49.928 Test: test_write_number_int64 ...passed 00:09:49.928 Test: test_write_number_uint64 ...passed 00:09:49.928 Test: test_write_number_double ...passed 00:09:49.928 Test: test_write_uuid ...passed 00:09:49.928 Test: test_write_array ...passed 00:09:49.928 Test: test_write_object ...passed 00:09:49.928 Test: test_write_nesting ...passed 00:09:49.928 Test: test_write_val ...passed 00:09:49.928 00:09:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.928 suites 1 1 n/a 0 0 00:09:49.928 tests 16 16 16 0 0 00:09:49.928 asserts 918 918 918 0 n/a 00:09:49.928 00:09:49.928 Elapsed time = 0.007 seconds 00:09:49.928 12:29:32 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:09:49.928 00:09:49.928 00:09:49.928 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.928 http://cunit.sourceforge.net/ 00:09:49.928 00:09:49.928 00:09:49.928 Suite: jsonrpc 00:09:49.928 Test: test_parse_request ...passed 00:09:49.928 Test: test_parse_request_streaming ...passed 00:09:49.928 00:09:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.928 suites 1 1 n/a 0 0 00:09:49.928 tests 2 2 2 0 0 00:09:49.928 asserts 289 289 289 0 n/a 00:09:49.928 00:09:49.928 Elapsed time = 0.006 seconds 00:09:49.928 00:09:49.928 real 0m0.222s 00:09:49.928 user 0m0.121s 00:09:49.928 sys 0m0.103s 00:09:49.928 12:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.928 ************************************ 00:09:49.928 END TEST unittest_json 00:09:49.928 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.928 ************************************ 00:09:49.928 12:29:32 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:09:49.929 12:29:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.929 12:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.929 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.929 ************************************ 00:09:49.929 START TEST unittest_rpc 00:09:49.929 ************************************ 00:09:49.929 12:29:32 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:09:49.929 12:29:32 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:09:49.929 00:09:49.929 00:09:49.929 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.929 http://cunit.sourceforge.net/ 00:09:49.929 00:09:49.929 00:09:49.929 Suite: rpc 00:09:49.929 Test: test_jsonrpc_handler ...passed 00:09:49.929 Test: test_spdk_rpc_is_method_allowed ...passed 00:09:49.929 Test: test_rpc_get_methods ...passed 00:09:49.929 Test: test_rpc_spdk_get_version ...passed 00:09:49.929 Test: test_spdk_rpc_listen_close ...passed 00:09:49.929 00:09:49.929 [2024-10-01 12:29:32.390904] 
/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:09:49.929 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.929 suites 1 1 n/a 0 0 00:09:49.929 tests 5 5 5 0 0 00:09:49.929 asserts 20 20 20 0 n/a 00:09:49.929 00:09:49.929 Elapsed time = 0.001 seconds 00:09:49.929 00:09:49.929 real 0m0.049s 00:09:49.929 user 0m0.013s 00:09:49.929 sys 0m0.036s 00:09:49.929 12:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.929 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.929 ************************************ 00:09:49.929 END TEST unittest_rpc 00:09:49.929 ************************************ 00:09:50.188 12:29:32 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:50.188 12:29:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:50.188 12:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.189 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.189 ************************************ 00:09:50.189 START TEST unittest_notify 00:09:50.189 ************************************ 00:09:50.189 12:29:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:50.189 00:09:50.189 00:09:50.189 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.189 http://cunit.sourceforge.net/ 00:09:50.189 00:09:50.189 00:09:50.189 Suite: app_suite 00:09:50.189 Test: notify ...passed 00:09:50.189 00:09:50.189 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.189 suites 1 1 n/a 0 0 00:09:50.189 tests 1 1 1 0 0 00:09:50.189 asserts 13 13 13 0 n/a 00:09:50.189 00:09:50.189 Elapsed time = 0.000 seconds 00:09:50.189 00:09:50.189 real 0m0.048s 00:09:50.189 user 0m0.026s 00:09:50.189 sys 0m0.022s 00:09:50.189 12:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.189 ************************************ 00:09:50.189 END TEST unittest_notify 00:09:50.189 ************************************ 00:09:50.189 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.189 12:29:32 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:09:50.189 12:29:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:50.189 12:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.189 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.189 ************************************ 00:09:50.189 START TEST unittest_nvme 00:09:50.189 ************************************ 00:09:50.189 12:29:32 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:09:50.189 12:29:32 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:09:50.189 00:09:50.189 00:09:50.189 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.189 http://cunit.sourceforge.net/ 00:09:50.189 00:09:50.189 00:09:50.189 Suite: nvme 00:09:50.189 Test: test_opc_data_transfer ...passed 00:09:50.189 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:09:50.189 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:09:50.189 Test: test_trid_parse_and_compare ...[2024-10-01 12:29:32.637977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:09:50.189 [2024-10-01 12:29:32.638447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:50.189 [2024-10-01 
12:29:32.638605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:09:50.189 [2024-10-01 12:29:32.638673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:50.189 [2024-10-01 12:29:32.638730] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:09:50.189 [2024-10-01 12:29:32.638887] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:50.189 passed 00:09:50.189 Test: test_trid_trtype_str ...passed 00:09:50.189 Test: test_trid_adrfam_str ...passed 00:09:50.189 Test: test_nvme_ctrlr_probe ...[2024-10-01 12:29:32.639190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:50.189 passed 00:09:50.189 Test: test_spdk_nvme_probe ...[2024-10-01 12:29:32.639355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:50.189 [2024-10-01 12:29:32.639412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:50.189 [2024-10-01 12:29:32.639566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:09:50.189 passed 00:09:50.189 Test: test_spdk_nvme_connect ...[2024-10-01 12:29:32.639633] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:50.189 [2024-10-01 12:29:32.639733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:09:50.189 [2024-10-01 12:29:32.640165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:50.189 passed 00:09:50.189 Test: test_nvme_ctrlr_probe_internal ...[2024-10-01 12:29:32.640237] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:09:50.189 [2024-10-01 12:29:32.640379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:50.189 [2024-10-01 12:29:32.640421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:09:50.189 passed 00:09:50.189 Test: test_nvme_init_controllers ...[2024-10-01 12:29:32.640521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:09:50.189 passed 00:09:50.189 Test: test_nvme_driver_init ...[2024-10-01 12:29:32.640640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:09:50.189 [2024-10-01 12:29:32.640685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:50.448 [2024-10-01 12:29:32.749799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:09:50.448 passed 00:09:50.448 Test: test_spdk_nvme_detach ...passed 00:09:50.448 Test: test_nvme_completion_poll_cb ...[2024-10-01 12:29:32.749989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:09:50.448 passed 00:09:50.448 Test: test_nvme_user_copy_cmd_complete ...passed 00:09:50.448 Test: 
test_nvme_allocate_request_null ...passed 00:09:50.448 Test: test_nvme_allocate_request ...passed 00:09:50.448 Test: test_nvme_free_request ...passed 00:09:50.448 Test: test_nvme_allocate_request_user_copy ...passed 00:09:50.448 Test: test_nvme_robust_mutex_init_shared ...passed 00:09:50.448 Test: test_nvme_request_check_timeout ...passed 00:09:50.448 Test: test_nvme_wait_for_completion ...passed 00:09:50.448 Test: test_spdk_nvme_parse_func ...passed 00:09:50.448 Test: test_spdk_nvme_detach_async ...passed 00:09:50.448 Test: test_nvme_parse_addr ...[2024-10-01 12:29:32.751141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:09:50.448 passed 00:09:50.448 00:09:50.448 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.448 suites 1 1 n/a 0 0 00:09:50.448 tests 25 25 25 0 0 00:09:50.448 asserts 326 326 326 0 n/a 00:09:50.448 00:09:50.448 Elapsed time = 0.007 seconds 00:09:50.448 12:29:32 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:09:50.448 00:09:50.448 00:09:50.448 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.448 http://cunit.sourceforge.net/ 00:09:50.448 00:09:50.448 00:09:50.448 Suite: nvme_ctrlr 00:09:50.448 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-10-01 12:29:32.809034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 passed 00:09:50.448 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-10-01 12:29:32.811284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 passed 00:09:50.448 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-10-01 12:29:32.812627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 passed 00:09:50.448 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-10-01 12:29:32.813885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 passed 00:09:50.448 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-10-01 12:29:32.815181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 [2024-10-01 12:29:32.816350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-01 12:29:32.817571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-01 12:29:32.818739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:50.448 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-10-01 12:29:32.821121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 [2024-10-01 12:29:32.823395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-01 12:29:32.824593] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:50.448 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-10-01 12:29:32.827083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 [2024-10-01 12:29:32.828288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-01 12:29:32.830616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:50.448 Test: test_nvme_ctrlr_init_delay ...[2024-10-01 12:29:32.833078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 passed 00:09:50.448 Test: test_alloc_io_qpair_rr_1 ...[2024-10-01 12:29:32.834517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 [2024-10-01 12:29:32.834744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:50.448 [2024-10-01 12:29:32.835027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:50.448 [2024-10-01 12:29:32.835170] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:50.448 [2024-10-01 12:29:32.835273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:50.448 passed 00:09:50.448 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:09:50.448 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:09:50.448 Test: test_alloc_io_qpair_wrr_1 ...[2024-10-01 12:29:32.835521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 passed 00:09:50.448 Test: test_alloc_io_qpair_wrr_2 ...[2024-10-01 12:29:32.835866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.448 [2024-10-01 12:29:32.836119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:50.448 passed 00:09:50.448 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-10-01 12:29:32.836673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:09:50.448 [2024-10-01 12:29:32.836968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:50.448 [2024-10-01 12:29:32.837183] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
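[editor's note] The firmware-update failures just above ("spdk_nvme_ctrlr_update_firmware invalid size!", "spdk_nvme_ctrlr_fw_image_download failed!", "nvme_ctrlr_cmd_fw_commit failed!") come from spdk_nvme_ctrlr_update_firmware() rejecting bad inputs before anything reaches a device. A minimal sketch of the call the test exercises, assuming SPDK's public spdk/nvme.h; ctrlr, image and size are hypothetical placeholders:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* 'ctrlr' and 'image' are hypothetical; a real caller gets the controller
 * from spdk_nvme_probe()/spdk_nvme_connect(). */
static int
fw_update_example(struct spdk_nvme_ctrlr *ctrlr, void *image, uint32_t size)
{
    struct spdk_nvme_status status;
    int rc;

    /* A size that is not a multiple of 4 bytes (dwords) is rejected up
     * front, which is the "invalid size!" path driven by the unit test. */
    rc = spdk_nvme_ctrlr_update_firmware(ctrlr, image, size, 0 /* slot */,
                                         SPDK_NVME_FW_COMMIT_REPLACE_IMG,
                                         &status);
    if (rc != 0) {
        fprintf(stderr, "firmware update failed: rc=%d\n", rc);
    }
    return rc;
}
```

The test then drives the download and commit error paths in turn, which is why all three messages appear back to back in the log.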
00:09:50.449 [2024-10-01 12:29:32.837330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:50.449 passed 00:09:50.449 Test: test_nvme_ctrlr_fail ...[2024-10-01 12:29:32.837505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:09:50.449 passed 00:09:50.449 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:09:50.449 Test: test_nvme_ctrlr_set_supported_features ...passed 00:09:50.449 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:09:50.449 Test: test_nvme_ctrlr_test_active_ns ...[2024-10-01 12:29:32.838051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:09:50.708 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:09:50.708 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:09:50.708 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-10-01 12:29:33.075278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-10-01 12:29:33.082188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-10-01 12:29:33.083365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 [2024-10-01 12:29:33.083422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:09:50.708 passed 00:09:50.708 Test: test_alloc_io_qpair_fail ...[2024-10-01 12:29:33.084553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_add_remove_process ...passed 00:09:50.708 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:09:50.708 Test: test_nvme_ctrlr_set_state ...[2024-10-01 12:29:33.084655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-10-01 12:29:33.084784] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
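[editor's note] The recurring "admin_queue_size 0 is less than minimum defined by NVMe spec, use min value" message throughout this suite is a clamp, not a failure: the controller constructor raises the value to the spec minimum and continues. A sketch of where that option is set, assuming an admin_queue_size field in spdk_nvme_ctrlr_opts; the trid is hypothetical:

```c
#include <stddef.h>
#include "spdk/nvme.h"

/* 'trid' is hypothetical; a real one comes from
 * spdk_nvme_transport_id_parse() or discovery. */
static struct spdk_nvme_ctrlr *
connect_with_tiny_admin_queue(const struct spdk_nvme_transport_id *trid)
{
    struct spdk_nvme_ctrlr_opts opts;

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    /* Passing 0 is what the unit tests do on purpose: the library logs
     * the "less than minimum" warning and silently clamps the value
     * instead of failing the connect. */
    opts.admin_queue_size = 0;

    return spdk_nvme_connect(trid, &opts, sizeof(opts));
}
```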
00:09:50.708 [2024-10-01 12:29:33.084832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-10-01 12:29:33.105847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_ns_mgmt ...[2024-10-01 12:29:33.143773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_reset ...[2024-10-01 12:29:33.145331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_aer_callback ...[2024-10-01 12:29:33.145688] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-10-01 12:29:33.147071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:09:50.708 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:09:50.708 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-10-01 12:29:33.148751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:09:50.708 Test: test_nvme_ctrlr_ana_resize ...[2024-10-01 12:29:33.150089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:09:50.708 Test: test_nvme_transport_ctrlr_ready ...[2024-10-01 12:29:33.151601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:09:50.708 passed 00:09:50.708 Test: test_nvme_ctrlr_disable ...[2024-10-01 12:29:33.151646] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:09:50.708 [2024-10-01 12:29:33.151687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:50.708 passed 00:09:50.708 00:09:50.708 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.708 suites 1 1 n/a 0 0 00:09:50.708 tests 43 43 43 0 0 00:09:50.708 asserts 10418 10418 10418 0 n/a 00:09:50.708 00:09:50.708 Elapsed time = 0.303 seconds 00:09:50.708 12:29:33 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:09:50.708 00:09:50.708 00:09:50.708 CUnit - A unit testing framework for C - Version 2.1-3 
00:09:50.708 http://cunit.sourceforge.net/ 00:09:50.708 00:09:50.708 00:09:50.708 Suite: nvme_ctrlr_cmd 00:09:50.708 Test: test_get_log_pages ...passed 00:09:50.708 Test: test_set_feature_cmd ...passed 00:09:50.708 Test: test_set_feature_ns_cmd ...passed 00:09:50.708 Test: test_get_feature_cmd ...passed 00:09:50.708 Test: test_get_feature_ns_cmd ...passed 00:09:50.708 Test: test_abort_cmd ...passed 00:09:50.708 Test: test_set_host_id_cmds ...[2024-10-01 12:29:33.212727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:09:50.708 passed 00:09:50.708 Test: test_io_cmd_raw_no_payload_build ...passed 00:09:50.708 Test: test_io_raw_cmd ...passed 00:09:50.708 Test: test_io_raw_cmd_with_md ...passed 00:09:50.708 Test: test_namespace_attach ...passed 00:09:50.708 Test: test_namespace_detach ...passed 00:09:50.708 Test: test_namespace_create ...passed 00:09:50.708 Test: test_namespace_delete ...passed 00:09:50.708 Test: test_doorbell_buffer_config ...passed 00:09:50.708 Test: test_format_nvme ...passed 00:09:50.708 Test: test_fw_commit ...passed 00:09:50.708 Test: test_fw_image_download ...passed 00:09:50.708 Test: test_sanitize ...passed 00:09:50.708 Test: test_directive ...passed 00:09:50.708 Test: test_nvme_request_add_abort ...passed 00:09:50.708 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:09:50.708 Test: test_nvme_ctrlr_cmd_identify ...passed 00:09:50.708 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:09:50.708 00:09:50.708 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.708 suites 1 1 n/a 0 0 00:09:50.708 tests 24 24 24 0 0 00:09:50.708 asserts 198 198 198 0 n/a 00:09:50.708 00:09:50.708 Elapsed time = 0.001 seconds 00:09:50.968 12:29:33 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:09:50.968 00:09:50.968 00:09:50.968 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.968 http://cunit.sourceforge.net/ 00:09:50.968 00:09:50.968 00:09:50.968 Suite: nvme_ctrlr_cmd 00:09:50.968 Test: test_geometry_cmd ...passed 00:09:50.968 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:09:50.968 00:09:50.968 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.968 suites 1 1 n/a 0 0 00:09:50.968 tests 2 2 2 0 0 00:09:50.968 asserts 7 7 7 0 n/a 00:09:50.968 00:09:50.968 Elapsed time = 0.000 seconds 00:09:50.968 12:29:33 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:09:50.968 00:09:50.968 00:09:50.968 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.968 http://cunit.sourceforge.net/ 00:09:50.968 00:09:50.968 00:09:50.968 Suite: nvme 00:09:50.968 Test: test_nvme_ns_construct ...passed 00:09:50.968 Test: test_nvme_ns_uuid ...passed 00:09:50.968 Test: test_nvme_ns_csi ...passed 00:09:50.968 Test: test_nvme_ns_data ...passed 00:09:50.968 Test: test_nvme_ns_set_identify_data ...passed 00:09:50.968 Test: test_spdk_nvme_ns_get_values ...passed 00:09:50.968 Test: test_spdk_nvme_ns_is_active ...passed 00:09:50.968 Test: spdk_nvme_ns_supports ...passed 00:09:50.968 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:09:50.968 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:09:50.968 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:09:50.968 Test: test_nvme_ns_find_id_desc ...passed 00:09:50.968 00:09:50.968 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.968 suites 1 1 n/a 0 0 00:09:50.968 tests 
12 12 12 0 0 00:09:50.968 asserts 83 83 83 0 n/a 00:09:50.968 00:09:50.968 Elapsed time = 0.000 seconds 00:09:50.968 12:29:33 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:09:50.968 00:09:50.968 00:09:50.968 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.968 http://cunit.sourceforge.net/ 00:09:50.968 00:09:50.968 00:09:50.968 Suite: nvme_ns_cmd 00:09:50.968 Test: split_test ...passed 00:09:50.968 Test: split_test2 ...passed 00:09:50.968 Test: split_test3 ...passed 00:09:50.968 Test: split_test4 ...passed 00:09:50.968 Test: test_nvme_ns_cmd_flush ...passed 00:09:50.968 Test: test_nvme_ns_cmd_dataset_management ...passed 00:09:50.968 Test: test_nvme_ns_cmd_copy ...passed 00:09:50.968 Test: test_io_flags ...[2024-10-01 12:29:33.323759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:09:50.968 passed 00:09:50.968 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:09:50.968 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:09:50.968 Test: test_nvme_ns_cmd_reservation_register ...passed 00:09:50.968 Test: test_nvme_ns_cmd_reservation_release ...passed 00:09:50.968 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:09:50.968 Test: test_nvme_ns_cmd_reservation_report ...passed 00:09:50.968 Test: test_cmd_child_request ...passed 00:09:50.968 Test: test_nvme_ns_cmd_readv ...passed 00:09:50.968 Test: test_nvme_ns_cmd_read_with_md ...passed 00:09:50.968 Test: test_nvme_ns_cmd_writev ...passed 00:09:50.968 Test: test_nvme_ns_cmd_write_with_md ...[2024-10-01 12:29:33.324978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:09:50.968 passed 00:09:50.968 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:09:50.968 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:09:50.968 Test: test_nvme_ns_cmd_comparev ...passed 00:09:50.968 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:09:50.968 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:09:50.968 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:09:50.968 Test: test_nvme_ns_cmd_setup_request ...passed 00:09:50.968 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:09:50.968 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:09:50.968 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-10-01 12:29:33.326719] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:50.968 [2024-10-01 12:29:33.326815] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:50.968 passed 00:09:50.968 Test: test_nvme_ns_cmd_verify ...passed 00:09:50.968 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:09:50.968 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:09:50.968 00:09:50.968 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.968 suites 1 1 n/a 0 0 00:09:50.968 tests 32 32 32 0 0 00:09:50.968 asserts 550 550 550 0 n/a 00:09:50.968 00:09:50.968 Elapsed time = 0.004 seconds 00:09:50.968 12:29:33 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:09:50.968 00:09:50.968 00:09:50.968 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.968 http://cunit.sourceforge.net/ 00:09:50.968 00:09:50.968 00:09:50.968 Suite: nvme_ns_cmd 00:09:50.968 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
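[editor's note] The "Invalid io_flags 0xfffc" and "Invalid io_flags 0xffff000f" errors in the nvme_ns_cmd suite above come from flag validation on the submission path: flags with reserved bits set are rejected before the command is built. A sketch of a read that trips the check, assuming the public spdk_nvme_ns_cmd_read() signature; ns, qpair and buf are hypothetical:

```c
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;
    (void)cpl;
}

/* ns/qpair/buf are hypothetical; in a real app they come from an attached
 * controller. io_flags with reserved bits set (e.g. 0xfffc) fail validation
 * before the request is ever sent to the device. */
static int
read_with_bad_flags(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    void *buf)
{
    int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */,
                                   1 /* lba_count */, io_done, NULL,
                                   0xfffc /* invalid io_flags */);
    if (rc != 0) {
        fprintf(stderr, "read rejected: rc=%d\n", rc);
    }
    return rc;
}
```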
00:09:50.968 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:09:50.969 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:09:50.969 00:09:50.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.969 suites 1 1 n/a 0 0 00:09:50.969 tests 12 12 12 0 0 00:09:50.969 asserts 123 123 123 0 n/a 00:09:50.969 00:09:50.969 Elapsed time = 0.002 seconds 00:09:50.969 12:29:33 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:09:50.969 00:09:50.969 00:09:50.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.969 http://cunit.sourceforge.net/ 00:09:50.969 00:09:50.969 00:09:50.969 Suite: nvme_qpair 00:09:50.969 Test: test3 ...passed 00:09:50.969 Test: test_ctrlr_failed ...passed 00:09:50.969 Test: struct_packing ...passed 00:09:50.969 Test: test_nvme_qpair_process_completions ...[2024-10-01 12:29:33.421344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:50.969 [2024-10-01 12:29:33.421796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:50.969 [2024-10-01 12:29:33.421870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:50.969 passed 00:09:50.969 Test: test_nvme_completion_is_retry ...passed 00:09:50.969 Test: test_get_status_string ...passed 00:09:50.969 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-10-01 12:29:33.422003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:09:50.969 passed 00:09:50.969 Test: test_nvme_qpair_submit_request ...passed 00:09:50.969 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:09:50.969 Test: test_nvme_qpair_manual_complete_request ...passed 00:09:50.969 Test: test_nvme_qpair_init_deinit ...passed 00:09:50.969 Test: test_nvme_get_sgl_print_info ...passed 00:09:50.969 00:09:50.969 [2024-10-01 12:29:33.422556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:50.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.969 suites 1 1 n/a 0 0 00:09:50.969 tests 12 12 12 0 0 00:09:50.969 asserts 154 154 154 0 n/a 00:09:50.969 00:09:50.969 Elapsed time = 0.002 seconds 00:09:50.969 12:29:33 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:09:50.969 00:09:50.969 00:09:50.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.969 http://cunit.sourceforge.net/ 00:09:50.969 00:09:50.969 00:09:50.969 Suite: nvme_pcie 00:09:50.969 Test: test_prp_list_append 
...[2024-10-01 12:29:33.477097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:50.969 [2024-10-01 12:29:33.477637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:09:50.969 [2024-10-01 12:29:33.477737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:09:50.969 [2024-10-01 12:29:33.478128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:50.969 [2024-10-01 12:29:33.478287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:50.969 passed 00:09:50.969 Test: test_nvme_pcie_hotplug_monitor ...passed 00:09:50.969 Test: test_shadow_doorbell_update ...passed 00:09:50.969 Test: test_build_contig_hw_sgl_request ...passed 00:09:50.969 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:09:50.969 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:09:50.969 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:09:50.969 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:09:50.969 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:09:50.969 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...[2024-10-01 12:29:33.478609] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:50.969 passed 00:09:50.969 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-10-01 12:29:33.478744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
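[editor's note] The nvme_pcie failures above ("PRP 2 not page aligned (0x900800)", "out of PRP entries") exercise the PRP list builder. A standalone illustration of the alignment rule being tested, not SPDK code:

```c
#include <stdbool.h>
#include <stdint.h>

#define PRP_PAGE_SIZE 0x1000u  /* 4 KiB, the memory page size PRPs assume here */

/* Every PRP entry after the first must be page aligned; only PRP1 may carry
 * an offset into its page. 0x900800 from the log fails because
 * 0x900800 & 0xfff == 0x800. */
static bool
prp_entry_valid(uint64_t phys_addr, bool is_first_entry)
{
    if (is_first_entry) {
        return true;  /* offset allowed in PRP1 */
    }
    return (phys_addr & (PRP_PAGE_SIZE - 1)) == 0;
}
```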
00:09:50.969 passed 00:09:50.969 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:09:50.969 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:09:50.969 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:09:50.969 00:09:50.969 [2024-10-01 12:29:33.478894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:09:50.969 [2024-10-01 12:29:33.478977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:09:50.969 [2024-10-01 12:29:33.479067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:09:50.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.969 suites 1 1 n/a 0 0 00:09:50.969 tests 14 14 14 0 0 00:09:50.969 asserts 235 235 235 0 n/a 00:09:50.969 00:09:50.969 Elapsed time = 0.002 seconds 00:09:51.249 12:29:33 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:09:51.249 00:09:51.249 00:09:51.249 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.249 http://cunit.sourceforge.net/ 00:09:51.249 00:09:51.249 00:09:51.249 Suite: nvme_ns_cmd 00:09:51.249 Test: nvme_poll_group_create_test ...passed 00:09:51.249 Test: nvme_poll_group_add_remove_test ...passed 00:09:51.249 Test: nvme_poll_group_process_completions ...passed 00:09:51.249 Test: nvme_poll_group_destroy_test ...passed 00:09:51.249 Test: nvme_poll_group_get_free_stats ...passed 00:09:51.249 00:09:51.249 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.249 suites 1 1 n/a 0 0 00:09:51.249 tests 5 5 5 0 0 00:09:51.249 asserts 75 75 75 0 n/a 00:09:51.249 00:09:51.249 Elapsed time = 0.001 seconds 00:09:51.249 12:29:33 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:09:51.249 00:09:51.249 00:09:51.249 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.249 http://cunit.sourceforge.net/ 00:09:51.249 00:09:51.249 00:09:51.249 Suite: nvme_quirks 00:09:51.249 Test: test_nvme_quirks_striping ...passed 00:09:51.249 00:09:51.249 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.249 suites 1 1 n/a 0 0 00:09:51.249 tests 1 1 1 0 0 00:09:51.249 asserts 5 5 5 0 n/a 00:09:51.249 00:09:51.249 Elapsed time = 0.000 seconds 00:09:51.249 12:29:33 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:09:51.249 00:09:51.249 00:09:51.249 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.249 http://cunit.sourceforge.net/ 00:09:51.249 00:09:51.249 00:09:51.249 Suite: nvme_tcp 00:09:51.249 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:09:51.249 Test: test_nvme_tcp_build_iovs ...passed 00:09:51.249 Test: test_nvme_tcp_build_sgl_request ...passed 00:09:51.250 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...[2024-10-01 12:29:33.614227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffe2b13f280, and the iovcnt=16, remaining_size=28672 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:09:51.250 Test: test_nvme_tcp_req_complete_safe ...passed 00:09:51.250 Test: test_nvme_tcp_req_get ...passed 00:09:51.250 Test: test_nvme_tcp_req_init ...passed 00:09:51.250 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:09:51.250 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:09:51.250 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:09:51.250 Test: test_nvme_tcp_alloc_reqs ...[2024-10-01 12:29:33.614921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b140fa0 is same with the state(6) to be set 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-10-01 12:29:33.615328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b140130 is same with the state(5) to be set 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_pdu_ch_handle ...[2024-10-01 12:29:33.615403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffe2b140c60 00:09:51.250 [2024-10-01 12:29:33.615471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:09:51.250 [2024-10-01 12:29:33.615580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.615654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:09:51.250 [2024-10-01 12:29:33.615760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.615842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:09:51.250 [2024-10-01 12:29:33.615916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.615979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.616045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_qpair_connect_sock ...[2024-10-01 12:29:33.616125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.616187] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.616253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b1405f0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.616431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:09:51.250 [2024-10-01 12:29:33.616502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:51.250 [2024-10-01 12:29:33.616849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:09:51.250 Test: test_nvme_tcp_c2h_payload_handle ...[2024-10-01 12:29:33.617007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe2b1407a0): PDU Sequence Error 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_icresp_handle ...[2024-10-01 12:29:33.617148] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:09:51.250 [2024-10-01 12:29:33.617200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:09:51.250 [2024-10-01 12:29:33.617256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b140140 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.617322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:09:51.250 [2024-10-01 12:29:33.617379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b140140 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.617454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b140140 is same with the state(0) to be set 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:09:51.250 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:09:51.250 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-10-01 12:29:33.617540] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe2b140c60): PDU Sequence Error 00:09:51.250 [2024-10-01 12:29:33.617649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffe2b13f420 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-10-01 12:29:33.617831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffe2b13eaa0, errno=0, rc=0 00:09:51.250 [2024-10-01 12:29:33.617894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b13eaa0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.617978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe2b13eaa0 is same with the state(5) to be set 00:09:51.250 [2024-10-01 12:29:33.618044] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe2b13eaa0 (0): Success 00:09:51.250 [2024-10-01 12:29:33.618108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe2b13eaa0 (0): Success 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-10-01 12:29:33.732463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
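[editor's note] "Failed to create qpair with size 0. Minimum queue size is 2." just above reflects how the submission/completion rings work: one slot always stays empty so a full ring can be told apart from an empty one, so sizes 0 and 1 are rejected. A sketch of allocating an I/O qpair with an explicit size, assuming the public opts API; ctrlr is hypothetical:

```c
#include "spdk/nvme.h"

/* 'ctrlr' is hypothetical; it would come from probe/connect. */
static struct spdk_nvme_qpair *
alloc_qpair_example(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.io_queue_size = 128;  /* anything >= 2 and <= the controller max */

    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```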
00:09:51.250 [2024-10-01 12:29:33.732605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:09:51.250 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:09:51.250 Test: test_nvme_tcp_ctrlr_construct ...[2024-10-01 12:29:33.732819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:51.250 [2024-10-01 12:29:33.732856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:51.250 [2024-10-01 12:29:33.733046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:51.250 [2024-10-01 12:29:33.733087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:51.250 [2024-10-01 12:29:33.733208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:09:51.250 [2024-10-01 12:29:33.733276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:51.250 [2024-10-01 12:29:33.733370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:09:51.250 [2024-10-01 12:29:33.733436] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:51.250 passed 00:09:51.250 Test: test_nvme_tcp_qpair_submit_request ...[2024-10-01 12:29:33.733572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:09:51.250 [2024-10-01 12:29:33.733615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:09:51.250 passed 00:09:51.250 00:09:51.250 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.250 suites 1 1 n/a 0 0 00:09:51.250 tests 27 27 27 0 0 00:09:51.250 asserts 624 624 624 0 n/a 00:09:51.250 00:09:51.250 Elapsed time = 0.120 seconds 00:09:51.250 12:29:33 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:09:51.509 00:09:51.509 00:09:51.509 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.509 http://cunit.sourceforge.net/ 00:09:51.509 00:09:51.509 00:09:51.509 Suite: nvme_transport 00:09:51.509 Test: test_nvme_get_transport ...passed 00:09:51.509 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:09:51.509 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:09:51.509 Test: test_nvme_transport_poll_group_add_remove ...passed 00:09:51.509 Test: test_ctrlr_get_memory_domains ...passed 00:09:51.509 00:09:51.510 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.510 suites 1 1 n/a 0 0 00:09:51.510 tests 5 5 5 0 0 00:09:51.510 asserts 28 28 28 0 n/a 00:09:51.510 00:09:51.510 Elapsed time = 0.000 seconds 00:09:51.510 12:29:33 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:09:51.510 00:09:51.510 00:09:51.510 CUnit - A unit testing framework for 
C - Version 2.1-3 00:09:51.510 http://cunit.sourceforge.net/ 00:09:51.510 00:09:51.510 00:09:51.510 Suite: nvme_io_msg 00:09:51.510 Test: test_nvme_io_msg_send ...passed 00:09:51.510 Test: test_nvme_io_msg_process ...passed 00:09:51.510 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:09:51.510 00:09:51.510 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.510 suites 1 1 n/a 0 0 00:09:51.510 tests 3 3 3 0 0 00:09:51.510 asserts 56 56 56 0 n/a 00:09:51.510 00:09:51.510 Elapsed time = 0.000 seconds 00:09:51.510 12:29:33 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:09:51.510 00:09:51.510 00:09:51.510 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.510 http://cunit.sourceforge.net/ 00:09:51.510 00:09:51.510 00:09:51.510 Suite: nvme_pcie_common 00:09:51.510 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-10-01 12:29:33.890958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:09:51.510 passed 00:09:51.510 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:09:51.510 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:09:51.510 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-10-01 12:29:33.892061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:09:51.510 [2024-10-01 12:29:33.892267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:09:51.510 [2024-10-01 12:29:33.892331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:09:51.510 passed 00:09:51.510 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:09:51.510 Test: test_nvme_pcie_poll_group_get_stats ...[2024-10-01 12:29:33.892992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:51.510 passed 00:09:51.510 00:09:51.510 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.510 suites 1 1 n/a 0 0 00:09:51.510 tests 6 6 6 0 0 00:09:51.510 asserts 148 148 148 0 n/a 00:09:51.510 00:09:51.510 Elapsed time = 0.002 seconds 00:09:51.510 [2024-10-01 12:29:33.893083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:51.510 12:29:33 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:09:51.510 00:09:51.510 00:09:51.510 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.510 http://cunit.sourceforge.net/ 00:09:51.510 00:09:51.510 00:09:51.510 Suite: nvme_fabric 00:09:51.510 Test: test_nvme_fabric_prop_set_cmd ...passed 00:09:51.510 Test: test_nvme_fabric_prop_get_cmd ...passed 00:09:51.510 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:09:51.510 Test: test_nvme_fabric_discover_probe ...passed 00:09:51.510 Test: test_nvme_fabric_qpair_connect ...[2024-10-01 12:29:33.943015] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:09:51.510 passed 00:09:51.510 00:09:51.510 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.510 suites 1 
1 n/a 0 0 00:09:51.510 tests 5 5 5 0 0 00:09:51.510 asserts 60 60 60 0 n/a 00:09:51.510 00:09:51.510 Elapsed time = 0.001 seconds 00:09:51.510 12:29:33 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:09:51.510 00:09:51.510 00:09:51.510 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.510 http://cunit.sourceforge.net/ 00:09:51.510 00:09:51.510 00:09:51.510 Suite: nvme_opal 00:09:51.510 Test: test_opal_nvme_security_recv_send_done ...passed 00:09:51.510 Test: test_opal_add_short_atom_header ...passed 00:09:51.510 00:09:51.510 [2024-10-01 12:29:33.993438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:09:51.510 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.510 suites 1 1 n/a 0 0 00:09:51.510 tests 2 2 2 0 0 00:09:51.510 asserts 22 22 22 0 n/a 00:09:51.510 00:09:51.510 Elapsed time = 0.001 seconds 00:09:51.510 00:09:51.510 real 0m1.392s 00:09:51.510 user 0m0.611s 00:09:51.510 sys 0m0.642s 00:09:51.510 12:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.510 12:29:34 -- common/autotest_common.sh@10 -- # set +x 00:09:51.510 ************************************ 00:09:51.510 END TEST unittest_nvme 00:09:51.510 ************************************ 00:09:51.769 12:29:34 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:51.769 12:29:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:51.769 12:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.769 12:29:34 -- common/autotest_common.sh@10 -- # set +x 00:09:51.769 ************************************ 00:09:51.769 START TEST unittest_log 00:09:51.769 ************************************ 00:09:51.769 12:29:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:51.769 00:09:51.769 00:09:51.769 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.769 http://cunit.sourceforge.net/ 00:09:51.769 00:09:51.769 00:09:51.769 Suite: log 00:09:51.769 Test: log_test ...[2024-10-01 12:29:34.103415] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:09:51.769 [2024-10-01 12:29:34.103814] log_ut.c: 55:log_test: *DEBUG*: log test 00:09:51.769 log dump test: 00:09:51.769 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:09:51.769 spdk dump test: 00:09:51.769 passed 00:09:51.769 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:09:51.769 spdk dump test: 00:09:51.769 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:09:51.769 00000010 65 20 63 68 61 72 73 e chars 00:09:52.707 passed 00:09:52.707 00:09:52.707 Run Summary: Type Total Ran Passed Failed Inactive 00:09:52.707 suites 1 1 n/a 0 0 00:09:52.707 tests 2 2 2 0 0 00:09:52.707 asserts 73 73 73 0 n/a 00:09:52.707 00:09:52.707 Elapsed time = 0.001 seconds 00:09:52.707 00:09:52.707 real 0m1.051s 00:09:52.707 user 0m0.024s 00:09:52.707 sys 0m0.028s 00:09:52.707 12:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.707 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.707 ************************************ 00:09:52.707 END TEST unittest_log 00:09:52.707 ************************************ 00:09:52.707 12:29:35 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:52.707 12:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:09:52.707 12:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.707 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.707 ************************************ 00:09:52.707 START TEST unittest_lvol 00:09:52.707 ************************************ 00:09:52.707 12:29:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:52.968 00:09:52.968 00:09:52.968 CUnit - A unit testing framework for C - Version 2.1-3 00:09:52.968 http://cunit.sourceforge.net/ 00:09:52.968 00:09:52.968 00:09:52.968 Suite: lvol 00:09:52.968 Test: lvs_init_unload_success ...[2024-10-01 12:29:35.242164] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:09:52.968 passed 00:09:52.968 Test: lvs_init_destroy_success ...[2024-10-01 12:29:35.242918] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:09:52.968 passed 00:09:52.968 Test: lvs_init_opts_success ...passed 00:09:52.968 Test: lvs_unload_lvs_is_null_fail ...passed 00:09:52.968 Test: lvs_names ...[2024-10-01 12:29:35.243267] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:09:52.968 [2024-10-01 12:29:35.243326] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:09:52.968 [2024-10-01 12:29:35.243382] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:09:52.968 [2024-10-01 12:29:35.243557] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:09:52.968 passed 00:09:52.968 Test: lvol_create_destroy_success ...passed 00:09:52.968 Test: lvol_create_fail ...[2024-10-01 12:29:35.244282] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:09:52.968 [2024-10-01 12:29:35.244407] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:09:52.968 passed 00:09:52.968 Test: lvol_destroy_fail ...[2024-10-01 12:29:35.244681] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:09:52.968 passed 00:09:52.968 Test: lvol_close ...[2024-10-01 12:29:35.244891] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:09:52.968 [2024-10-01 12:29:35.244967] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:09:52.968 passed 00:09:52.968 Test: lvol_resize ...passed 00:09:52.968 Test: lvol_set_read_only ...passed 00:09:52.968 Test: test_lvs_load ...[2024-10-01 12:29:35.245729] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:09:52.968 passed 00:09:52.968 Test: lvols_load ...[2024-10-01 12:29:35.245781] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:09:52.968 [2024-10-01 12:29:35.246008] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:52.968 passed 00:09:52.968 Test: lvol_open ...[2024-10-01 12:29:35.246149] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:52.968 passed 00:09:52.968 Test: lvol_snapshot ...passed 00:09:52.968 Test: lvol_snapshot_fail ...[2024-10-01 
12:29:35.246885] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:09:52.968 passed 00:09:52.968 Test: lvol_clone ...passed 00:09:52.968 Test: lvol_clone_fail ...[2024-10-01 12:29:35.247466] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:09:52.968 passed 00:09:52.968 Test: lvol_iter_clones ...passed 00:09:52.968 Test: lvol_refcnt ...[2024-10-01 12:29:35.248009] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 258476f9-b798-467d-922b-623a827e58ec because it is still open 00:09:52.968 passed 00:09:52.968 Test: lvol_names ...[2024-10-01 12:29:35.248238] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:09:52.968 [2024-10-01 12:29:35.248328] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:52.968 [2024-10-01 12:29:35.248567] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:09:52.968 passed 00:09:52.968 Test: lvol_create_thin_provisioned ...passed 00:09:52.968 Test: lvol_rename ...[2024-10-01 12:29:35.249055] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:52.968 [2024-10-01 12:29:35.249152] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:09:52.968 passed 00:09:52.968 Test: lvs_rename ...[2024-10-01 12:29:35.249405] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:09:52.968 passed 00:09:52.968 Test: lvol_inflate ...[2024-10-01 12:29:35.249593] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:52.968 passed 00:09:52.968 Test: lvol_decouple_parent ...[2024-10-01 12:29:35.249862] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:52.968 passed 00:09:52.968 Test: lvol_get_xattr ...passed 00:09:52.968 Test: lvol_esnap_reload ...passed 00:09:52.968 Test: lvol_esnap_create_bad_args ...[2024-10-01 12:29:35.250307] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:09:52.968 [2024-10-01 12:29:35.250359] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
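[editor's note] "Name has no null terminator." just above is a defensive check in lvs_verify_lvol_name: the supplied name buffer must contain a NUL within the maximum name size before it can be treated as a C string. A standalone sketch of the same idea; LVOL_NAME_MAX is assumed here for illustration:

```c
#include <stdbool.h>
#include <string.h>

#define LVOL_NAME_MAX 64  /* assumed: max lvol name buffer size for this sketch */

/* Mirrors the terminator check: reject the name unless a NUL byte occurs
 * within the fixed-size buffer. */
static bool
lvol_name_terminated(const char *name)
{
    return memchr(name, '\0', LVOL_NAME_MAX) != NULL;
}
```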
00:09:52.968 [2024-10-01 12:29:35.250406] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:09:52.968 [2024-10-01 12:29:35.250545] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:52.968 [2024-10-01 12:29:35.250698] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:09:52.968 passed 00:09:52.968 Test: lvol_esnap_create_delete ...passed 00:09:52.968 Test: lvol_esnap_load_esnaps ...passed 00:09:52.968 Test: lvol_esnap_missing ...[2024-10-01 12:29:35.251033] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:09:52.968 [2024-10-01 12:29:35.251198] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:52.968 [2024-10-01 12:29:35.251251] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:52.968 passed 00:09:52.968 Test: lvol_esnap_hotplug ... 00:09:52.968 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:09:52.968 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:09:52.968 [2024-10-01 12:29:35.251941] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 0b544105-336c-41f4-808a-cf65b2041c84: failed to create esnap bs_dev: error -12 00:09:52.968 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:09:52.968 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:09:52.968 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:09:52.968 [2024-10-01 12:29:35.252122] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a6f317c3-6356-4a7c-9864-1dc9d8dae0ae: failed to create esnap bs_dev: error -12 00:09:52.968 [2024-10-01 12:29:35.252226] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6c1864a9-44e9-4db6-a94e-29855a42ca9c: failed to create esnap bs_dev: error -12 00:09:52.968 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:09:52.968 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:09:52.968 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:09:52.968 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:09:52.969 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:09:52.969 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:09:52.969 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:09:52.969 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:09:52.969 passed 00:09:52.969 Test: lvol_get_by ...passed 00:09:52.969 00:09:52.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:52.969 suites 1 1 n/a 0 0 00:09:52.969 tests 34 34 34 0 0 00:09:52.969 asserts 1439 1439 1439 0 n/a 00:09:52.969 00:09:52.969 Elapsed time = 0.011 seconds 00:09:52.969 00:09:52.969 real 0m0.063s 00:09:52.969 user 0m0.038s 00:09:52.969 sys 0m0.025s 00:09:52.969 12:29:35 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.969 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.969 ************************************ 00:09:52.969 END TEST unittest_lvol 00:09:52.969 ************************************ 00:09:52.969 12:29:35 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:52.969 12:29:35 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:52.969 12:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:52.969 12:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.969 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.969 ************************************ 00:09:52.969 START TEST unittest_nvme_rdma 00:09:52.969 ************************************ 00:09:52.969 12:29:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:52.969 00:09:52.969 00:09:52.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:52.969 http://cunit.sourceforge.net/ 00:09:52.969 00:09:52.969 00:09:52.969 Suite: nvme_rdma 00:09:52.969 Test: test_nvme_rdma_build_sgl_request ...[2024-10-01 12:29:35.382551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:09:52.969 [2024-10-01 12:29:35.383078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:09:52.969 Test: test_nvme_rdma_build_contig_request ...[2024-10-01 12:29:35.383238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:09:52.969 [2024-10-01 12:29:35.383375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:09:52.969 Test: test_nvme_rdma_create_reqs ...[2024-10-01 12:29:35.383562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_create_rsps ...[2024-10-01 12:29:35.384089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-10-01 12:29:35.384409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_poller_create ...[2024-10-01 12:29:35.384502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
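[editor's note] "SGL length 16777216 exceeds max keyed SGL block size 16777215" near the start of this nvme_rdma suite follows from the NVMe-oF keyed SGL data block descriptor carrying a 24-bit length field. A standalone check, not SPDK code:

```c
#include <stdbool.h>
#include <stdint.h>

/* A keyed SGL descriptor encodes its length in 24 bits, so one block tops
 * out at 2^24 - 1 = 16777215 bytes; the 16777216-byte request in the log
 * is one byte too large. */
#define KEYED_SGL_MAX_LEN ((1u << 24) - 1)

static bool
keyed_sgl_len_ok(uint64_t len)
{
    return len <= KEYED_SGL_MAX_LEN;
}
```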
00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:09:52.969 Test: test_nvme_rdma_ctrlr_construct ...[2024-10-01 12:29:35.384748] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_req_put_and_get ...passed 00:09:52.969 Test: test_nvme_rdma_req_init ...passed 00:09:52.969 Test: test_nvme_rdma_validate_cm_event ...[2024-10-01 12:29:35.385103] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_qpair_init ...passed 00:09:52.969 Test: test_nvme_rdma_qpair_submit_request ...passed 00:09:52.969 Test: test_nvme_rdma_memory_domain ...[2024-10-01 12:29:35.385158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:09:52.969 [2024-10-01 12:29:35.385344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:09:52.969 passed 00:09:52.969 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:09:52.969 Test: test_rdma_get_memory_translation ...[2024-10-01 12:29:35.385441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:09:52.969 passed 00:09:52.969 Test: test_get_rdma_qpair_from_wc ...passed 00:09:52.969 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:09:52.969 Test: test_nvme_rdma_poll_group_get_stats ...[2024-10-01 12:29:35.385506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:09:52.969 [2024-10-01 12:29:35.385599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:52.969 [2024-10-01 12:29:35.385649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:52.969 passed 00:09:52.969 Test: test_nvme_rdma_qpair_set_poller ...[2024-10-01 12:29:35.385770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:09:52.969 [2024-10-01 12:29:35.385830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:09:52.969 [2024-10-01 12:29:35.385875] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd03fbf7e0 on poll group 0x60b0000001a0 00:09:52.969 [2024-10-01 12:29:35.385946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:09:52.969 passed 00:09:52.969 00:09:52.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:52.969 suites 1 1 n/a 0 0 00:09:52.969 tests 22 22 22 0 0 00:09:52.969 asserts 412 412 412 0 n/a 00:09:52.969 00:09:52.969 Elapsed time = 0.004 seconds 00:09:52.969 [2024-10-01 12:29:35.385994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:09:52.969 [2024-10-01 12:29:35.386041] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd03fbf7e0 on poll group 0x60b0000001a0 00:09:52.969 [2024-10-01 12:29:35.386118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:52.969 00:09:52.969 real 0m0.055s 00:09:52.969 user 0m0.016s 00:09:52.969 sys 0m0.040s 00:09:52.969 12:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.969 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.969 ************************************ 00:09:52.969 END TEST unittest_nvme_rdma 00:09:52.969 ************************************ 00:09:52.969 12:29:35 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:52.969 12:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:52.969 12:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.969 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.969 ************************************ 00:09:52.969 START TEST unittest_nvmf_transport 00:09:52.969 ************************************ 00:09:52.969 12:29:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:53.230 00:09:53.230 00:09:53.230 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.230 http://cunit.sourceforge.net/ 00:09:53.230 00:09:53.230 00:09:53.230 Suite: nvmf 00:09:53.230 Test: test_spdk_nvmf_transport_create ...[2024-10-01 12:29:35.520707] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:09:53.230 [2024-10-01 12:29:35.521108] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:09:53.230 [2024-10-01 12:29:35.521190] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:09:53.230 [2024-10-01 12:29:35.521345] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:09:53.230 passed 00:09:53.230 Test: test_nvmf_transport_poll_group_create ...passed 00:09:53.230 Test: test_spdk_nvmf_transport_opts_init ...[2024-10-01 12:29:35.521660] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:09:53.230 [2024-10-01 12:29:35.521772] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:09:53.230 [2024-10-01 12:29:35.521832] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:09:53.230 passed 00:09:53.230 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:09:53.230 00:09:53.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.230 suites 1 1 n/a 0 0 00:09:53.230 tests 4 4 4 0 0 00:09:53.230 asserts 49 49 49 0 n/a 00:09:53.230 00:09:53.230 Elapsed time = 0.001 seconds 00:09:53.230 00:09:53.230 real 0m0.050s 00:09:53.230 user 0m0.016s 00:09:53.230 sys 0m0.034s 00:09:53.230 12:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.230 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:53.230 ************************************ 00:09:53.230 END TEST unittest_nvmf_transport 00:09:53.230 ************************************ 00:09:53.230 12:29:35 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:53.230 12:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:53.230 12:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.230 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:53.230 ************************************ 00:09:53.230 START TEST unittest_rdma 00:09:53.230 ************************************ 00:09:53.230 12:29:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:53.230 00:09:53.230 00:09:53.230 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.230 http://cunit.sourceforge.net/ 00:09:53.230 00:09:53.230 00:09:53.230 Suite: rdma_common 00:09:53.230 Test: test_spdk_rdma_pd ...[2024-10-01 12:29:35.640895] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:53.230 [2024-10-01 12:29:35.641432] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:53.230 passed 00:09:53.230 00:09:53.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.230 suites 1 1 n/a 0 0 00:09:53.230 tests 1 1 1 0 0 00:09:53.230 asserts 31 31 31 0 n/a 00:09:53.230 00:09:53.230 Elapsed time = 0.001 seconds 00:09:53.230 00:09:53.230 real 0m0.050s 00:09:53.230 user 0m0.030s 00:09:53.230 sys 0m0.021s 00:09:53.230 12:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.230 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:53.230 ************************************ 00:09:53.230 END TEST unittest_rdma 00:09:53.230 ************************************ 00:09:53.230 12:29:35 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:53.230 12:29:35 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:53.230 12:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:53.230 12:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.230 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:53.230 ************************************ 00:09:53.230 START TEST unittest_nvme_cuse 00:09:53.230 ************************************ 00:09:53.230 12:29:35 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:53.491 00:09:53.491 00:09:53.491 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.491 http://cunit.sourceforge.net/ 00:09:53.491 00:09:53.491 00:09:53.491 Suite: nvme_cuse 00:09:53.491 Test: test_cuse_nvme_submit_io_read_write ...passed 00:09:53.491 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:09:53.491 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:09:53.491 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:09:53.491 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:09:53.491 Test: test_cuse_nvme_submit_io ...[2024-10-01 12:29:35.771957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:09:53.491 passed 00:09:53.491 Test: test_cuse_nvme_reset ...passed 00:09:53.491 Test: test_nvme_cuse_stop ...[2024-10-01 12:29:35.772301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:09:53.491 passed 00:09:53.491 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:09:53.491 00:09:53.491 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.491 suites 1 1 n/a 0 0 00:09:53.491 tests 9 9 9 0 0 00:09:53.491 asserts 121 121 121 0 n/a 00:09:53.491 00:09:53.491 Elapsed time = 0.002 seconds 00:09:53.491 00:09:53.491 real 0m0.053s 00:09:53.491 user 0m0.032s 00:09:53.491 sys 0m0.021s 00:09:53.491 12:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.491 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 ************************************ 00:09:53.491 END TEST unittest_nvme_cuse 00:09:53.491 ************************************ 00:09:53.491 12:29:35 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:09:53.491 12:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:53.491 12:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.491 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 ************************************ 00:09:53.491 START TEST unittest_nvmf 00:09:53.491 ************************************ 00:09:53.491 12:29:35 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:09:53.491 12:29:35 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:09:53.491 00:09:53.491 00:09:53.491 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.491 http://cunit.sourceforge.net/ 00:09:53.491 00:09:53.491 00:09:53.491 Suite: nvmf 00:09:53.491 Test: test_get_log_page ...[2024-10-01 12:29:35.904768] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:09:53.491 passed 00:09:53.491 Test: test_process_fabrics_cmd ...passed 00:09:53.491 Test: test_connect ...[2024-10-01 12:29:35.905686] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:09:53.491 [2024-10-01 12:29:35.905810] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:09:53.491 [2024-10-01 12:29:35.905871] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:09:53.491 [2024-10-01 12:29:35.905915] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
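One connect check above is easy to get wrong in a real parser: "Connect HOSTNQN is not null terminated" means the fixed-size hostnqn field in the connect data must contain a NUL before its end, and memchr verifies that without running off the buffer. A hedged sketch (the 256-byte field size matches the NVMe-oF connect data layout; the helper name is invented):

    #include <stdbool.h>
    #include <string.h>

    static bool hostnqn_terminated(const char field[256])
    {
        return memchr(field, '\0', 256) != NULL;
    }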
00:09:53.491 [2024-10-01 12:29:35.906034] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:09:53.491 [2024-10-01 12:29:35.906077] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:09:53.491 [2024-10-01 12:29:35.906206] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:09:53.491 [2024-10-01 12:29:35.906255] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:09:53.491 [2024-10-01 12:29:35.906366] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:09:53.491 [2024-10-01 12:29:35.906433] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:09:53.491 [2024-10-01 12:29:35.906691] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:09:53.491 [2024-10-01 12:29:35.906770] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:09:53.491 [2024-10-01 12:29:35.906867] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:09:53.491 [2024-10-01 12:29:35.906934] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:09:53.491 [2024-10-01 12:29:35.907049] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:09:53.491 passed 00:09:53.491 Test: test_get_ns_id_desc_list ...[2024-10-01 12:29:35.907180] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:09:53.491 passed 00:09:53.491 Test: test_identify_ns ...[2024-10-01 12:29:35.907401] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:53.491 [2024-10-01 12:29:35.907602] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:09:53.491 [2024-10-01 12:29:35.907743] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:09:53.491 passed 00:09:53.491 Test: test_identify_ns_iocs_specific ...[2024-10-01 12:29:35.907997] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:53.491 [2024-10-01 12:29:35.908275] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:53.491 passed 00:09:53.491 Test: test_reservation_write_exclusive ...passed 00:09:53.491 Test: test_reservation_exclusive_access ...passed 00:09:53.491 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:09:53.491 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:09:53.491 Test: test_reservation_notification_log_page ...passed 00:09:53.491 Test: test_get_dif_ctx ...passed 00:09:53.491 Test: test_set_get_features ...[2024-10-01 12:29:35.908846] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:53.491 passed 00:09:53.491 Test: test_identify_ctrlr ...passed 00:09:53.491 Test: test_identify_ctrlr_iocs_specific ...[2024-10-01 12:29:35.908894] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:53.491 [2024-10-01 12:29:35.908946] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:09:53.491 [2024-10-01 12:29:35.909022] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:09:53.491 passed 00:09:53.491 Test: test_custom_admin_cmd ...passed 00:09:53.491 Test: test_fused_compare_and_write ...[2024-10-01 12:29:35.909445] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:09:53.491 [2024-10-01 12:29:35.909496] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:53.491 [2024-10-01 12:29:35.909550] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:53.491 passed 00:09:53.491 Test: test_multi_async_event_reqs ...passed 00:09:53.491 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:09:53.491 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:09:53.491 Test: test_multi_async_events ...passed 00:09:53.491 Test: test_rae ...passed 00:09:53.491 Test: test_nvmf_ctrlr_create_destruct ...passed 00:09:53.491 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:09:53.491 Test: test_spdk_nvmf_request_zcopy_start ...[2024-10-01 12:29:35.909978] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:09:53.491 passed 00:09:53.492 Test: test_zcopy_read ...passed 00:09:53.492 Test: test_zcopy_write ...passed 00:09:53.492 Test: test_nvmf_property_set ...passed 00:09:53.492 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-10-01 12:29:35.910139] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:53.492 [2024-10-01 12:29:35.910232] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:53.492 passed 00:09:53.492 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-10-01 12:29:35.910289] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:09:53.492 [2024-10-01 12:29:35.910338] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:09:53.492 [2024-10-01 12:29:35.910378] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:53.492 passed 00:09:53.492 00:09:53.492 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.492 suites 1 1 n/a 0 0 00:09:53.492 tests 30 30 30 0 0 00:09:53.492 asserts 885 885 885 0 n/a 00:09:53.492 00:09:53.492 Elapsed time = 0.006 seconds 00:09:53.492 12:29:35 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:09:53.492 00:09:53.492 00:09:53.492 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.492 http://cunit.sourceforge.net/ 00:09:53.492 00:09:53.492 00:09:53.492 Suite: nvmf 00:09:53.492 Test: test_get_rw_params ...passed 00:09:53.492 Test: test_lba_in_range ...passed 00:09:53.492 Test: test_get_dif_ctx ...passed 00:09:53.492 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:09:53.492 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-10-01 12:29:35.952060] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:09:53.492 [2024-10-01 12:29:35.952468] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:09:53.492 [2024-10-01 12:29:35.952620] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:09:53.492 passed 00:09:53.492 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-10-01 12:29:35.952707] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:09:53.492 [2024-10-01 12:29:35.952851] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:09:53.492 passed 00:09:53.492 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-10-01 12:29:35.953020] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:09:53.492 [2024-10-01 12:29:35.953086] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:09:53.492 [2024-10-01 12:29:35.953187] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:09:53.492 [2024-10-01 12:29:35.953248] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:09:53.492 passed 00:09:53.492 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:09:53.492 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:09:53.492 00:09:53.492 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.492 suites 1 1 n/a 0 0 00:09:53.492 tests 9 9 9 0 0 00:09:53.492 asserts 157 157 157 0 n/a 00:09:53.492 00:09:53.492 Elapsed time = 0.001 seconds 00:09:53.492 12:29:35 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:09:53.492 00:09:53.492 00:09:53.492 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.492 http://cunit.sourceforge.net/ 00:09:53.492 00:09:53.492 00:09:53.492 Suite: nvmf 00:09:53.492 Test: test_discovery_log ...passed 00:09:53.492 Test: test_discovery_log_with_filters ...passed 00:09:53.492 00:09:53.492 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.492 suites 1 1 n/a 0 0 00:09:53.492 tests 2 2 2 0 0 00:09:53.492 asserts 238 238 238 0 n/a 00:09:53.492 00:09:53.492 Elapsed time = 0.003 seconds 00:09:53.753 12:29:36 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:09:53.753 00:09:53.753 00:09:53.753 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.753 http://cunit.sourceforge.net/ 00:09:53.753 00:09:53.753 00:09:53.753 Suite: nvmf 
00:09:53.753 Test: nvmf_test_create_subsystem ...[2024-10-01 12:29:36.059471] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:09:53.753 [2024-10-01 12:29:36.059857] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:09:53.753 [2024-10-01 12:29:36.059978] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:09:53.753 [2024-10-01 12:29:36.060038] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:09:53.753 [2024-10-01 12:29:36.060086] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:09:53.753 [2024-10-01 12:29:36.060143] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:09:53.753 [2024-10-01 12:29:36.060296] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:09:53.753 [2024-10-01 12:29:36.060522] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
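The subsystem_ut strings above spell out the NQN grammar piece by piece: the "nqn." prefix, a date, a reverse domain whose labels must start with a letter and end alphanumeric, a ':' before the user part, valid UTF-8, and an overall length between 11 and 223 bytes. A partial, self-contained sketch of just the cheap structural checks; a real validator also handles the date, the labels, UUID NQNs, and UTF-8:

    #include <stdbool.h>
    #include <string.h>

    #define NQN_MIN_LEN 11
    #define NQN_MAX_LEN 223

    static bool nqn_shape_ok(const char *nqn)
    {
        size_t len = strlen(nqn);
        const char *colon;

        if (len < NQN_MIN_LEN || len > NQN_MAX_LEN)
            return false;                    /* the 224-byte case above fails */
        if (strncmp(nqn, "nqn.", 4) != 0)
            return false;
        colon = strchr(nqn, ':');
        return colon != NULL && colon[1] != '\0';
    }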
00:09:53.753 [2024-10-01 12:29:36.060648] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:09:53.753 [2024-10-01 12:29:36.060705] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:53.753 [2024-10-01 12:29:36.060747] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:53.753 passed 00:09:53.753 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-10-01 12:29:36.060970] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:09:53.753 [2024-10-01 12:29:36.061113] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:09:53.753 passed 00:09:53.753 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:09:53.753 Test: test_reservation_register ...[2024-10-01 12:29:36.061430] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 [2024-10-01 12:29:36.061573] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:09:53.753 passed 00:09:53.753 Test: test_reservation_register_with_ptpl ...passed 00:09:53.753 Test: test_reservation_acquire_preempt_1 ...[2024-10-01 12:29:36.062676] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 passed 00:09:53.753 Test: test_reservation_acquire_release_with_ptpl ...passed 00:09:53.753 Test: test_reservation_release ...[2024-10-01 12:29:36.064724] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 passed 00:09:53.753 Test: test_reservation_unregister_notification ...[2024-10-01 12:29:36.065016] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 passed 00:09:53.753 Test: test_reservation_release_notification ...[2024-10-01 12:29:36.065301] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 passed 00:09:53.753 Test: test_reservation_release_notification_write_exclusive ...[2024-10-01 12:29:36.065562] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 passed 00:09:53.753 Test: test_reservation_clear_notification ...[2024-10-01 12:29:36.065847] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 passed 00:09:53.753 Test: test_reservation_preempt_notification ...[2024-10-01 12:29:36.066134] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:53.753 passed 00:09:53.753 Test: test_spdk_nvmf_ns_event ...passed 00:09:53.754 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:09:53.754 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:09:53.754 Test: test_spdk_nvmf_subsystem_add_host ...[2024-10-01 12:29:36.066920] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_ns_reservation_report ...[2024-10-01 12:29:36.067034] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_nqn_is_valid ...[2024-10-01 12:29:36.067164] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_ns_reservation_restore ...[2024-10-01 12:29:36.067230] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:09:53.754 [2024-10-01 12:29:36.067293] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:d9653e31-4ac8-4333-a4a7-e53046a2451": uuid is not the correct length 00:09:53.754 [2024-10-01 12:29:36.067325] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:09:53.754 [2024-10-01 12:29:36.067441] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_subsystem_state_change ...passed 00:09:53.754 Test: test_nvmf_reservation_custom_ops ...passed 00:09:53.754 00:09:53.754 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.754 suites 1 1 n/a 0 0 00:09:53.754 tests 22 22 22 0 0 00:09:53.754 asserts 407 407 407 0 n/a 00:09:53.754 00:09:53.754 Elapsed time = 0.009 seconds 00:09:53.754 12:29:36 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:09:53.754 00:09:53.754 00:09:53.754 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.754 http://cunit.sourceforge.net/ 00:09:53.754 00:09:53.754 00:09:53.754 Suite: nvmf 00:09:53.754 Test: test_nvmf_tcp_create ...[2024-10-01 12:29:36.149472] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_tcp_destroy ...passed 00:09:53.754 Test: test_nvmf_tcp_poll_group_create ...passed 00:09:53.754 Test: test_nvmf_tcp_send_c2h_data ...passed 00:09:53.754 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:09:53.754 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:09:53.754 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:09:53.754 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-10-01 12:29:36.233941] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234017] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234092] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234127] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234155] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:09:53.754 Test: test_nvmf_tcp_icreq_handle ...passed 00:09:53.754 Test: test_nvmf_tcp_check_xfer_type ...[2024-10-01 12:29:36.234223] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:53.754 [2024-10-01 12:29:36.234295] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234339] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234366] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_tcp_invalid_sgl ...[2024-10-01 12:29:36.234416] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234444] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234476] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234512] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234558] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234614] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:09:53.754 [2024-10-01 12:29:36.234651] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234679] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f12b0 is same with the state(5) to be set 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-10-01 12:29:36.234730] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe0f9f2010 00:09:53.754 [2024-10-01 12:29:36.234808] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234851] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234890] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe0f9f1770 00:09:53.754 [2024-10-01 12:29:36.234921] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.234954] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.234977] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:09:53.754 [2024-10-01 12:29:36.235014] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235054] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.235091] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:09:53.754 [2024-10-01 12:29:36.235122] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235158] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.235188] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235220] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.235268] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235297] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.235333] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235355] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 12:29:36.235391] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235418] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-10-01 12:29:36.235463] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235491] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 [2024-10-01 
12:29:36.235532] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:53.754 [2024-10-01 12:29:36.235554] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe0f9f1770 is same with the state(5) to be set 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:09:53.754 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-10-01 12:29:36.254561] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:09:53.754 [2024-10-01 12:29:36.254641] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:09:53.754 passed 00:09:53.754 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-10-01 12:29:36.254952] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:09:53.754 [2024-10-01 12:29:36.254989] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:09:53.754 [2024-10-01 12:29:36.255148] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:09:53.754 passed 00:09:53.754 00:09:53.754 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.754 suites 1 1 n/a 0 0 00:09:53.754 tests 17 17 17 0 0 00:09:53.754 asserts 222 222 222 0 n/a 00:09:53.754 00:09:53.754 Elapsed time = 0.127 seconds 00:09:53.754 [2024-10-01 12:29:36.255183] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
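The TLS PSK errors at the end of tcp_ut are all defensive buffer and parameter checks: identity generation and key derivation refuse to proceed when the output buffer is too small or the cipher suite is unknown, rather than truncating. A generic sketch of the truncation-safe pattern; the format string here is a stand-in, not the real NVMe/TCP PSK identity layout:

    #include <stdbool.h>
    #include <stdio.h>

    static bool build_identity(char *out, size_t out_len,
                               const char *hostnqn, const char *subnqn)
    {
        int n = snprintf(out, out_len, "%s %s", hostnqn, subnqn);
        /* reject truncation: "Out buffer too small!" in the log above */
        return n >= 0 && (size_t)n < out_len;
    }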
00:09:54.014 12:29:36 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:09:54.014 00:09:54.014 00:09:54.014 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.014 http://cunit.sourceforge.net/ 00:09:54.014 00:09:54.014 00:09:54.014 Suite: nvmf 00:09:54.014 Test: test_nvmf_tgt_create_poll_group ...passed 00:09:54.014 00:09:54.014 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.014 suites 1 1 n/a 0 0 00:09:54.014 tests 1 1 1 0 0 00:09:54.014 asserts 17 17 17 0 n/a 00:09:54.014 00:09:54.014 Elapsed time = 0.024 seconds 00:09:54.014 ************************************ 00:09:54.014 END TEST unittest_nvmf 00:09:54.014 ************************************ 00:09:54.014 00:09:54.014 real 0m0.569s 00:09:54.014 user 0m0.273s 00:09:54.014 sys 0m0.298s 00:09:54.014 12:29:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.014 12:29:36 -- common/autotest_common.sh@10 -- # set +x 00:09:54.014 12:29:36 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:54.014 12:29:36 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:54.014 12:29:36 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:54.014 12:29:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:54.014 12:29:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.014 12:29:36 -- common/autotest_common.sh@10 -- # set +x 00:09:54.014 ************************************ 00:09:54.014 START TEST unittest_nvmf_rdma 00:09:54.014 ************************************ 00:09:54.014 12:29:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:54.273 00:09:54.273 00:09:54.273 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.273 http://cunit.sourceforge.net/ 00:09:54.273 00:09:54.273 00:09:54.273 Suite: nvmf 00:09:54.273 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-10-01 12:29:36.555986] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:09:54.273 [2024-10-01 12:29:36.556368] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:09:54.273 [2024-10-01 12:29:36.556420] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:09:54.273 passed 00:09:54.273 Test: test_spdk_nvmf_rdma_request_process ...passed 00:09:54.273 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:09:54.273 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:09:54.273 Test: test_nvmf_rdma_opts_init ...passed 00:09:54.273 Test: test_nvmf_rdma_request_free_data ...passed 00:09:54.273 Test: test_nvmf_rdma_update_ibv_state ...passed 00:09:54.273 Test: test_nvmf_rdma_resources_create ...[2024-10-01 12:29:36.557705] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
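The rdma_ut parse failures above reduce to two comparisons on the incoming SGL descriptor: the transfer may not exceed the transport's max I/O size (0x40000 > 0x20000 fails), and in-capsule data may not exceed the negotiated capsule length (0x1000 > 0x0 and 0x2000 > 0x1000 both fail). A sketch under invented names:

    #include <stdbool.h>
    #include <stdint.h>

    static bool rdma_sgl_ok(uint32_t sgl_len, uint32_t max_io_size)
    {
        return sgl_len <= max_io_size;
    }

    static bool in_capsule_ok(uint32_t data_len, uint32_t capsule_len)
    {
        return data_len <= capsule_len;
    }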
00:09:54.273 [2024-10-01 12:29:36.557761] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:09:54.273 passed 00:09:54.273 Test: test_nvmf_rdma_qpair_compare ...passed 00:09:54.273 Test: test_nvmf_rdma_resize_cq ...[2024-10-01 12:29:36.559136] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:09:54.273 Using CQ of insufficient size may lead to CQ overrun 00:09:54.273 [2024-10-01 12:29:36.559247] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:09:54.273 passed 00:09:54.273 00:09:54.273 [2024-10-01 12:29:36.559320] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:54.273 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.273 suites 1 1 n/a 0 0 00:09:54.273 tests 10 10 10 0 0 00:09:54.273 asserts 584 584 584 0 n/a 00:09:54.273 00:09:54.273 Elapsed time = 0.004 seconds 00:09:54.273 00:09:54.273 real 0m0.053s 00:09:54.273 user 0m0.025s 00:09:54.273 sys 0m0.028s 00:09:54.273 12:29:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.273 12:29:36 -- common/autotest_common.sh@10 -- # set +x 00:09:54.273 ************************************ 00:09:54.273 END TEST unittest_nvmf_rdma 00:09:54.273 ************************************ 00:09:54.273 12:29:36 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:54.273 12:29:36 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:09:54.274 12:29:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:54.274 12:29:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.274 12:29:36 -- common/autotest_common.sh@10 -- # set +x 00:09:54.274 ************************************ 00:09:54.274 START TEST unittest_scsi 00:09:54.274 ************************************ 00:09:54.274 12:29:36 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:09:54.274 12:29:36 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:09:54.274 00:09:54.274 00:09:54.274 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.274 http://cunit.sourceforge.net/ 00:09:54.274 00:09:54.274 00:09:54.274 Suite: dev_suite 00:09:54.274 Test: dev_destruct_null_dev ...passed 00:09:54.274 Test: dev_destruct_zero_luns ...passed 00:09:54.274 Test: dev_destruct_null_lun ...passed 00:09:54.274 Test: dev_destruct_success ...passed 00:09:54.274 Test: dev_construct_num_luns_zero ...[2024-10-01 12:29:36.693057] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:09:54.274 passed 00:09:54.274 Test: dev_construct_no_lun_zero ...passed 00:09:54.274 Test: dev_construct_null_lun ...passed 00:09:54.274 Test: dev_construct_name_too_long ...[2024-10-01 12:29:36.693558] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:09:54.274 [2024-10-01 12:29:36.693643] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:09:54.274 passed 00:09:54.274 Test: dev_construct_success ...passed 00:09:54.274 Test: dev_construct_success_lun_zero_not_first ...passed 00:09:54.274 Test: 
dev_queue_mgmt_task_success ...[2024-10-01 12:29:36.693725] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:09:54.274 passed 00:09:54.274 Test: dev_queue_task_success ...passed 00:09:54.274 Test: dev_stop_success ...passed 00:09:54.274 Test: dev_add_port_max_ports ...passed 00:09:54.274 Test: dev_add_port_construct_failure1 ...[2024-10-01 12:29:36.694192] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:09:54.274 [2024-10-01 12:29:36.694367] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:09:54.274 passed 00:09:54.274 Test: dev_add_port_construct_failure2 ...passed 00:09:54.274 Test: dev_add_port_success1 ...passed 00:09:54.274 Test: dev_add_port_success2 ...passed 00:09:54.274 Test: dev_add_port_success3 ...passed 00:09:54.274 Test: dev_find_port_by_id_num_ports_zero ...passed 00:09:54.274 Test: dev_find_port_by_id_id_not_found_failure ...[2024-10-01 12:29:36.694507] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:09:54.274 passed 00:09:54.274 Test: dev_find_port_by_id_success ...passed 00:09:54.274 Test: dev_add_lun_bdev_not_found ...passed 00:09:54.274 Test: dev_add_lun_no_free_lun_id ...[2024-10-01 12:29:36.695194] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:09:54.274 passed 00:09:54.274 Test: dev_add_lun_success1 ...passed 00:09:54.274 Test: dev_add_lun_success2 ...passed 00:09:54.274 Test: dev_check_pending_tasks ...passed 00:09:54.274 Test: dev_iterate_luns ...passed 00:09:54.274 Test: dev_find_free_lun ...passed 00:09:54.274 00:09:54.274 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.274 suites 1 1 n/a 0 0 00:09:54.274 tests 29 29 29 0 0 00:09:54.274 asserts 97 97 97 0 n/a 00:09:54.274 00:09:54.274 Elapsed time = 0.003 seconds 00:09:54.274 12:29:36 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:09:54.274 00:09:54.274 00:09:54.274 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.274 http://cunit.sourceforge.net/ 00:09:54.274 00:09:54.274 00:09:54.274 Suite: lun_suite 00:09:54.274 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:09:54.274 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-10-01 12:29:36.743360] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:09:54.274 passed 00:09:54.274 Test: lun_task_mgmt_execute_lun_reset ...passed 00:09:54.274 Test: lun_task_mgmt_execute_target_reset ...passed 00:09:54.274 Test: lun_task_mgmt_execute_invalid_case ...passed 00:09:54.274 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-10-01 12:29:36.743709] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:09:54.274 [2024-10-01 12:29:36.743897] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:09:54.274 passed 00:09:54.274 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:09:54.274 
Test: lun_append_task_null_lun_not_supported ...passed 00:09:54.274 Test: lun_execute_scsi_task_pending ...passed 00:09:54.274 Test: lun_execute_scsi_task_complete ...passed 00:09:54.274 Test: lun_execute_scsi_task_resize ...passed 00:09:54.274 Test: lun_destruct_success ...passed 00:09:54.274 Test: lun_construct_null_ctx ...passed 00:09:54.274 Test: lun_construct_success ...passed 00:09:54.274 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:09:54.274 Test: lun_reset_task_suspend_scsi_task ...[2024-10-01 12:29:36.744086] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:09:54.274 passed 00:09:54.274 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:09:54.274 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:09:54.274 00:09:54.274 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.274 suites 1 1 n/a 0 0 00:09:54.274 tests 18 18 18 0 0 00:09:54.274 asserts 153 153 153 0 n/a 00:09:54.274 00:09:54.274 Elapsed time = 0.001 seconds 00:09:54.274 12:29:36 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:09:54.274 00:09:54.274 00:09:54.274 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.274 http://cunit.sourceforge.net/ 00:09:54.274 00:09:54.274 00:09:54.274 Suite: scsi_suite 00:09:54.274 Test: scsi_init ...passed 00:09:54.274 00:09:54.274 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.274 suites 1 1 n/a 0 0 00:09:54.274 tests 1 1 1 0 0 00:09:54.274 asserts 1 1 1 0 n/a 00:09:54.274 00:09:54.274 Elapsed time = 0.000 seconds 00:09:54.534 12:29:36 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:09:54.534 00:09:54.534 00:09:54.534 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.534 http://cunit.sourceforge.net/ 00:09:54.534 00:09:54.534 00:09:54.534 Suite: translation_suite 00:09:54.534 Test: mode_select_6_test ...passed 00:09:54.534 Test: mode_select_6_test2 ...passed 00:09:54.534 Test: mode_sense_6_test ...passed 00:09:54.534 Test: mode_sense_10_test ...passed 00:09:54.534 Test: inquiry_evpd_test ...passed 00:09:54.534 Test: inquiry_standard_test ...passed 00:09:54.534 Test: inquiry_overflow_test ...passed 00:09:54.534 Test: task_complete_test ...passed 00:09:54.534 Test: lba_range_test ...passed 00:09:54.534 Test: xfer_len_test ...[2024-10-01 12:29:36.826964] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:09:54.534 passed 00:09:54.534 Test: xfer_test ...passed 00:09:54.534 Test: scsi_name_padding_test ...passed 00:09:54.534 Test: get_dif_ctx_test ...passed 00:09:54.534 Test: unmap_split_test ...passed 00:09:54.534 00:09:54.534 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.534 suites 1 1 n/a 0 0 00:09:54.534 tests 14 14 14 0 0 00:09:54.534 asserts 1200 1200 1200 0 n/a 00:09:54.534 00:09:54.534 Elapsed time = 0.005 seconds 00:09:54.534 12:29:36 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:09:54.534 00:09:54.534 00:09:54.534 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.534 http://cunit.sourceforge.net/ 00:09:54.534 00:09:54.534 00:09:54.534 Suite: reservation_suite 00:09:54.534 Test: test_reservation_register ...[2024-10-01 12:29:36.863639] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 
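The scsi_bdev_ut xfer_len and lba_range tests above gate a read/write before it touches the device: the transfer length is capped (8193 > 8192 fails), and the LBA window must fit on the media, checked in a way that cannot overflow. A sketch of both guards; 8192 is just the limit the test configures:

    #include <stdbool.h>
    #include <stdint.h>

    static bool xfer_len_ok(uint32_t xfer_len, uint32_t max_xfer_len)
    {
        return xfer_len <= max_xfer_len;
    }

    /* Overflow-safe end-of-media check: never compute lba + nblocks. */
    static bool lba_range_ok(uint64_t lba, uint64_t nblocks, uint64_t total_blocks)
    {
        return nblocks <= total_blocks && lba <= total_blocks - nblocks;
    }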
00:09:54.534 passed 00:09:54.534 Test: test_reservation_reserve ...passed 00:09:54.534 Test: test_reservation_preempt_non_all_regs ...[2024-10-01 12:29:36.863978] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:54.534 [2024-10-01 12:29:36.864059] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:09:54.534 [2024-10-01 12:29:36.864169] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:09:54.534 [2024-10-01 12:29:36.864250] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:54.534 passed 00:09:54.534 Test: test_reservation_preempt_all_regs ...[2024-10-01 12:29:36.864335] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:09:54.534 [2024-10-01 12:29:36.864469] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:54.534 passed 00:09:54.534 Test: test_reservation_cmds_conflict ...[2024-10-01 12:29:36.864611] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:54.534 [2024-10-01 12:29:36.864686] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:09:54.534 [2024-10-01 12:29:36.864738] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:54.534 [2024-10-01 12:29:36.864779] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:54.534 [2024-10-01 12:29:36.864830] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:54.534 [2024-10-01 12:29:36.864868] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:54.534 passed 00:09:54.534 Test: test_scsi2_reserve_release ...passed 00:09:54.534 Test: test_pr_with_scsi2_reserve_release ...passed 00:09:54.534 00:09:54.534 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.534 suites 1 1 n/a 0 0 00:09:54.534 tests 7 7 7 0 0 00:09:54.534 asserts 257 257 257 0 n/a 00:09:54.534 00:09:54.534 Elapsed time = 0.002 seconds 00:09:54.534 [2024-10-01 12:29:36.864963] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:54.534 00:09:54.534 real 0m0.218s 00:09:54.534 user 0m0.136s 00:09:54.534 sys 0m0.083s 00:09:54.534 12:29:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.534 12:29:36 -- common/autotest_common.sh@10 -- # set +x 00:09:54.534 ************************************ 00:09:54.534 END TEST unittest_scsi 00:09:54.534 ************************************ 00:09:54.534 12:29:36 -- unit/unittest.sh@276 -- # uname -s 00:09:54.534 12:29:36 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:09:54.534 12:29:36 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:09:54.534 12:29:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
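Most scsi_pr_ut cases above fail the same way: the command's reservation key does not match the key the registrant holds (0xa1 vs 0xa), and a type-1 (write exclusive) reservation allows exactly one holder. The key comparison itself is one line; only the constants come from the log:

    #include <stdbool.h>
    #include <stdint.h>

    static bool pr_key_matches(uint64_t cmd_rkey, uint64_t registrant_key)
    {
        return cmd_rkey == registrant_key;   /* 0xa1 != 0xa, so rejected */
    }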
00:09:54.534 12:29:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.534 12:29:36 -- common/autotest_common.sh@10 -- # set +x 00:09:54.534 ************************************ 00:09:54.534 START TEST unittest_sock 00:09:54.534 ************************************ 00:09:54.534 12:29:36 -- common/autotest_common.sh@1104 -- # unittest_sock 00:09:54.534 12:29:36 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:09:54.534 00:09:54.534 00:09:54.534 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.534 http://cunit.sourceforge.net/ 00:09:54.534 00:09:54.534 00:09:54.534 Suite: sock 00:09:54.534 Test: posix_sock ...passed 00:09:54.534 Test: ut_sock ...passed 00:09:54.534 Test: posix_sock_group ...passed 00:09:54.534 Test: ut_sock_group ...passed 00:09:54.534 Test: posix_sock_group_fairness ...passed 00:09:54.534 Test: _posix_sock_close ...passed 00:09:54.534 Test: sock_get_default_opts ...passed 00:09:54.534 Test: ut_sock_impl_get_set_opts ...passed 00:09:54.534 Test: posix_sock_impl_get_set_opts ...passed 00:09:54.534 Test: ut_sock_map ...passed 00:09:54.534 Test: override_impl_opts ...passed 00:09:54.534 Test: ut_sock_group_get_ctx ...passed 00:09:54.534 00:09:54.534 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.534 suites 1 1 n/a 0 0 00:09:54.534 tests 12 12 12 0 0 00:09:54.534 asserts 349 349 349 0 n/a 00:09:54.534 00:09:54.534 Elapsed time = 0.008 seconds 00:09:54.534 12:29:37 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:09:54.794 00:09:54.794 00:09:54.794 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.794 http://cunit.sourceforge.net/ 00:09:54.794 00:09:54.794 00:09:54.794 Suite: posix 00:09:54.794 Test: flush ...passed 00:09:54.794 00:09:54.794 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.794 suites 1 1 n/a 0 0 00:09:54.794 tests 1 1 1 0 0 00:09:54.794 asserts 28 28 28 0 n/a 00:09:54.794 00:09:54.794 Elapsed time = 0.000 seconds 00:09:54.794 12:29:37 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:54.794 00:09:54.794 real 0m0.127s 00:09:54.794 user 0m0.042s 00:09:54.794 sys 0m0.063s 00:09:54.794 12:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.794 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:09:54.794 ************************************ 00:09:54.794 END TEST unittest_sock 00:09:54.794 ************************************ 00:09:54.794 12:29:37 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:54.794 12:29:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:54.794 12:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.794 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:09:54.794 ************************************ 00:09:54.794 START TEST unittest_thread 00:09:54.794 ************************************ 00:09:54.794 12:29:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:54.794 00:09:54.794 00:09:54.794 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.794 http://cunit.sourceforge.net/ 00:09:54.794 00:09:54.794 00:09:54.794 Suite: io_channel 00:09:54.794 Test: thread_alloc ...passed 00:09:54.794 Test: thread_send_msg ...passed 00:09:54.794 Test: thread_poller ...passed 00:09:54.794 Test: poller_pause ...passed 
00:09:54.794 Test: thread_for_each ...passed 00:09:54.794 Test: for_each_channel_remove ...passed 00:09:54.794 Test: for_each_channel_unreg ...[2024-10-01 12:29:37.223989] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffc1ec5e2d0 already registered (old:0x613000000200 new:0x6130000003c0) 00:09:54.794 passed 00:09:54.794 Test: thread_name ...passed 00:09:54.794 Test: channel ...[2024-10-01 12:29:37.226847] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x562bd736d0e0 00:09:54.794 passed 00:09:54.794 Test: channel_destroy_races ...passed 00:09:54.794 Test: thread_exit_test ...[2024-10-01 12:29:37.230405] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:09:54.794 passed 00:09:54.794 Test: thread_update_stats_test ...passed 00:09:54.794 Test: nested_channel ...passed 00:09:54.794 Test: device_unregister_and_thread_exit_race ...passed 00:09:54.794 Test: cache_closest_timed_poller ...passed 00:09:54.794 Test: multi_timed_pollers_have_same_expiration ...passed 00:09:54.794 Test: io_device_lookup ...passed 00:09:54.794 Test: spdk_spin ...[2024-10-01 12:29:37.237862] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:54.794 [2024-10-01 12:29:37.237905] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1ec5e2c0 00:09:54.794 [2024-10-01 12:29:37.237993] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:54.794 [2024-10-01 12:29:37.239161] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:54.794 [2024-10-01 12:29:37.239210] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1ec5e2c0 00:09:54.794 [2024-10-01 12:29:37.239242] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:54.794 [2024-10-01 12:29:37.239279] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1ec5e2c0 00:09:54.794 [2024-10-01 12:29:37.239306] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:54.794 [2024-10-01 12:29:37.239344] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1ec5e2c0 00:09:54.794 [2024-10-01 12:29:37.239370] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:09:54.794 [2024-10-01 12:29:37.239417] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1ec5e2c0 00:09:54.794 passed 00:09:54.794 Test: for_each_channel_and_thread_exit_race ...passed 00:09:54.794 Test: for_each_thread_and_thread_exit_race ...passed 00:09:54.794 00:09:54.794 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.794 suites 1 1 n/a 0 0 00:09:54.794 tests 20 20 20 0 0 00:09:54.794 asserts 409 409 409 0 
n/a 00:09:54.794 00:09:54.794 Elapsed time = 0.038 seconds 00:09:54.794 00:09:54.794 real 0m0.094s 00:09:54.794 user 0m0.053s 00:09:54.794 sys 0m0.041s 00:09:54.794 12:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.794 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:09:54.794 ************************************ 00:09:54.795 END TEST unittest_thread 00:09:54.795 ************************************ 00:09:55.054 12:29:37 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:55.054 12:29:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:55.054 12:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.054 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:09:55.054 ************************************ 00:09:55.054 START TEST unittest_iobuf 00:09:55.054 ************************************ 00:09:55.054 12:29:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:55.054 00:09:55.054 00:09:55.054 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.054 http://cunit.sourceforge.net/ 00:09:55.054 00:09:55.054 00:09:55.054 Suite: io_channel 00:09:55.054 Test: iobuf ...passed 00:09:55.054 Test: iobuf_cache ...[2024-10-01 12:29:37.381386] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:55.054 [2024-10-01 12:29:37.381697] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:55.054 [2024-10-01 12:29:37.381844] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:09:55.054 [2024-10-01 12:29:37.381899] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:55.054 [2024-10-01 12:29:37.381968] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:55.054 [2024-10-01 12:29:37.382016] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
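[Editor's note on the thread_ut spdk_spin output above: each numbered *ERROR* is the library deliberately reporting a misuse the test provokes -- error 1: locking from a non-SPDK thread; error 2: recursive lock (deadlock); error 3: unlocking from a different SPDK thread; error 5: destroying a lock while held. A sketch of the correct discipline follows; spdk_spin_init/lock/held/unlock/destroy are the real symbols named in the log (spdk/thread.h), while the surrounding scaffolding is assumed.]

/* Correct spdk_spinlock usage, mirroring the error classes above. */
#include <assert.h>
#include "spdk/thread.h"

static struct spdk_spinlock g_lock;

/* Must run on an SPDK thread, e.g. dispatched via spdk_thread_send_msg();
 * calling spdk_spin_lock() from a raw pthread triggers error 1 above. */
static void on_spdk_thread(void *ctx)
{
    (void)ctx;
    spdk_spin_init(&g_lock);

    spdk_spin_lock(&g_lock);
    /* spdk_spin_lock(&g_lock);  <- would be error 2: deadlock detected */
    assert(spdk_spin_held(&g_lock));
    spdk_spin_unlock(&g_lock);   /* error 3 if done from another SPDK thread */

    spdk_spin_destroy(&g_lock);  /* error 5 if the lock were still held here */
}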
00:09:55.054 passed 00:09:55.054 00:09:55.054 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.054 suites 1 1 n/a 0 0 00:09:55.054 tests 2 2 2 0 0 00:09:55.054 asserts 107 107 107 0 n/a 00:09:55.054 00:09:55.054 Elapsed time = 0.006 seconds 00:09:55.054 00:09:55.054 real 0m0.057s 00:09:55.054 user 0m0.017s 00:09:55.054 sys 0m0.040s 00:09:55.054 12:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.054 ************************************ 00:09:55.054 END TEST unittest_iobuf 00:09:55.054 ************************************ 00:09:55.055 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:09:55.055 12:29:37 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:09:55.055 12:29:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:55.055 12:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.055 12:29:37 -- common/autotest_common.sh@10 -- # set +x 00:09:55.055 ************************************ 00:09:55.055 START TEST unittest_util 00:09:55.055 ************************************ 00:09:55.055 12:29:37 -- common/autotest_common.sh@1104 -- # unittest_util 00:09:55.055 12:29:37 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:09:55.055 00:09:55.055 00:09:55.055 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.055 http://cunit.sourceforge.net/ 00:09:55.055 00:09:55.055 00:09:55.055 Suite: base64 00:09:55.055 Test: test_base64_get_encoded_strlen ...passed 00:09:55.055 Test: test_base64_get_decoded_len ...passed 00:09:55.055 Test: test_base64_encode ...passed 00:09:55.055 Test: test_base64_decode ...passed 00:09:55.055 Test: test_base64_urlsafe_encode ...passed 00:09:55.055 Test: test_base64_urlsafe_decode ...passed 00:09:55.055 00:09:55.055 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.055 suites 1 1 n/a 0 0 00:09:55.055 tests 6 6 6 0 0 00:09:55.055 asserts 112 112 112 0 n/a 00:09:55.055 00:09:55.055 Elapsed time = 0.000 seconds 00:09:55.055 12:29:37 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:09:55.055 00:09:55.055 00:09:55.055 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.055 http://cunit.sourceforge.net/ 00:09:55.055 00:09:55.055 00:09:55.055 Suite: bit_array 00:09:55.055 Test: test_1bit ...passed 00:09:55.055 Test: test_64bit ...passed 00:09:55.055 Test: test_find ...passed 00:09:55.055 Test: test_resize ...passed 00:09:55.055 Test: test_errors ...passed 00:09:55.055 Test: test_count ...passed 00:09:55.055 Test: test_mask_store_load ...passed 00:09:55.055 Test: test_mask_clear ...passed 00:09:55.055 00:09:55.055 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.055 suites 1 1 n/a 0 0 00:09:55.055 tests 8 8 8 0 0 00:09:55.055 asserts 5075 5075 5075 0 n/a 00:09:55.055 00:09:55.055 Elapsed time = 0.002 seconds 00:09:55.055 12:29:37 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:09:55.314 00:09:55.314 00:09:55.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.314 http://cunit.sourceforge.net/ 00:09:55.314 00:09:55.314 00:09:55.314 Suite: cpuset 00:09:55.314 Test: test_cpuset ...passed 00:09:55.314 Test: test_cpuset_parse ...[2024-10-01 12:29:37.598983] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:09:55.314 [2024-10-01 12:29:37.599417] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:09:55.314 [2024-10-01 12:29:37.599556] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:09:55.314 [2024-10-01 12:29:37.599683] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:09:55.314 [2024-10-01 12:29:37.599754] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:09:55.314 [2024-10-01 12:29:37.599821] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:09:55.314 [2024-10-01 12:29:37.599892] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:09:55.314 [2024-10-01 12:29:37.599974] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:09:55.314 passed 00:09:55.314 Test: test_cpuset_fmt ...passed 00:09:55.314 00:09:55.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.314 suites 1 1 n/a 0 0 00:09:55.314 tests 3 3 3 0 0 00:09:55.314 asserts 65 65 65 0 n/a 00:09:55.314 00:09:55.314 Elapsed time = 0.003 seconds 00:09:55.314 12:29:37 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:09:55.314 00:09:55.314 00:09:55.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.314 http://cunit.sourceforge.net/ 00:09:55.314 00:09:55.314 00:09:55.314 Suite: crc16 00:09:55.314 Test: test_crc16_t10dif ...passed 00:09:55.314 Test: test_crc16_t10dif_seed ...passed 00:09:55.314 Test: test_crc16_t10dif_copy ...passed 00:09:55.314 00:09:55.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.314 suites 1 1 n/a 0 0 00:09:55.314 tests 3 3 3 0 0 00:09:55.314 asserts 5 5 5 0 n/a 00:09:55.314 00:09:55.314 Elapsed time = 0.000 seconds 00:09:55.314 12:29:37 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:09:55.314 00:09:55.314 00:09:55.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.314 http://cunit.sourceforge.net/ 00:09:55.314 00:09:55.314 00:09:55.314 Suite: crc32_ieee 00:09:55.314 Test: test_crc32_ieee ...passed 00:09:55.314 00:09:55.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.314 suites 1 1 n/a 0 0 00:09:55.314 tests 1 1 1 0 0 00:09:55.314 asserts 1 1 1 0 n/a 00:09:55.314 00:09:55.314 Elapsed time = 0.000 seconds 00:09:55.314 12:29:37 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:09:55.314 00:09:55.314 00:09:55.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.314 http://cunit.sourceforge.net/ 00:09:55.314 00:09:55.314 00:09:55.314 Suite: crc32c 00:09:55.314 Test: test_crc32c ...passed 00:09:55.314 Test: test_crc32c_nvme ...passed 00:09:55.314 00:09:55.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.314 suites 1 1 n/a 0 0 00:09:55.314 tests 2 2 2 0 0 00:09:55.314 asserts 16 16 16 0 n/a 00:09:55.314 00:09:55.314 Elapsed time = 0.000 seconds 00:09:55.314 12:29:37 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:09:55.314 00:09:55.314 00:09:55.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.314 http://cunit.sourceforge.net/ 00:09:55.314 00:09:55.314 00:09:55.314 Suite: crc64 00:09:55.314 Test: test_crc64_nvme 
...passed 00:09:55.314 00:09:55.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.314 suites 1 1 n/a 0 0 00:09:55.314 tests 1 1 1 0 0 00:09:55.314 asserts 4 4 4 0 n/a 00:09:55.314 00:09:55.314 Elapsed time = 0.000 seconds 00:09:55.314 12:29:37 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:09:55.314 00:09:55.314 00:09:55.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.314 http://cunit.sourceforge.net/ 00:09:55.314 00:09:55.314 00:09:55.314 Suite: string 00:09:55.314 Test: test_parse_ip_addr ...passed 00:09:55.315 Test: test_str_chomp ...passed 00:09:55.315 Test: test_parse_capacity ...passed 00:09:55.315 Test: test_sprintf_append_realloc ...passed 00:09:55.315 Test: test_strtol ...passed 00:09:55.315 Test: test_strtoll ...passed 00:09:55.315 Test: test_strarray ...passed 00:09:55.315 Test: test_strcpy_replace ...passed 00:09:55.315 00:09:55.315 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.315 suites 1 1 n/a 0 0 00:09:55.315 tests 8 8 8 0 0 00:09:55.315 asserts 161 161 161 0 n/a 00:09:55.315 00:09:55.315 Elapsed time = 0.001 seconds 00:09:55.315 12:29:37 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:09:55.576 00:09:55.576 00:09:55.576 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.576 http://cunit.sourceforge.net/ 00:09:55.576 00:09:55.576 00:09:55.576 Suite: dif 00:09:55.576 Test: dif_generate_and_verify_test ...[2024-10-01 12:29:37.861130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:55.576 [2024-10-01 12:29:37.861891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:55.576 [2024-10-01 12:29:37.862312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:55.576 [2024-10-01 12:29:37.862730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:55.576 [2024-10-01 12:29:37.863139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:55.576 [2024-10-01 12:29:37.863563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:55.576 passed 00:09:55.576 Test: dif_disable_check_test ...[2024-10-01 12:29:37.865205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:55.576 [2024-10-01 12:29:37.865599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:55.576 [2024-10-01 12:29:37.865878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:55.576 passed 00:09:55.576 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-10-01 12:29:37.866871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:09:55.576 [2024-10-01 12:29:37.867170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:09:55.576 [2024-10-01 
12:29:37.867482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:09:55.576 [2024-10-01 12:29:37.867836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:09:55.576 [2024-10-01 12:29:37.868167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:55.576 [2024-10-01 12:29:37.868481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:55.576 [2024-10-01 12:29:37.868795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:55.576 [2024-10-01 12:29:37.869088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:55.576 [2024-10-01 12:29:37.869390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:55.576 [2024-10-01 12:29:37.869716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:55.576 [2024-10-01 12:29:37.870035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:55.576 passed 00:09:55.576 Test: dif_apptag_mask_test ...[2024-10-01 12:29:37.870348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:55.576 [2024-10-01 12:29:37.870642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:55.576 passed 00:09:55.576 Test: dif_sec_512_md_0_error_test ...[2024-10-01 12:29:37.870846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:55.576 passed 00:09:55.576 Test: dif_sec_4096_md_0_error_test ...passed 00:09:55.576 Test: dif_sec_4100_md_128_error_test ...[2024-10-01 12:29:37.870912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:55.576 [2024-10-01 12:29:37.870963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:09:55.576 [2024-10-01 12:29:37.871028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:55.576 passed 00:09:55.576 Test: dif_guard_seed_test ...passed 00:09:55.576 Test: dif_guard_value_test ...[2024-10-01 12:29:37.871077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:55.576 passed 00:09:55.576 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:55.576 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:55.576 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-10-01 12:29:37.899854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=ff4c, Actual=fd4c 00:09:55.577 [2024-10-01 12:29:37.901364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fc21, Actual=fe21 00:09:55.577 [2024-10-01 12:29:37.902849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.904346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.905854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.577 [2024-10-01 12:29:37.907337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.577 [2024-10-01 12:29:37.908840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=cf1f 00:09:55.577 [2024-10-01 12:29:37.910211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fe21, Actual=9af4 00:09:55.577 [2024-10-01 12:29:37.911593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab751ed, 
Actual=1ab753ed 00:09:55.577 [2024-10-01 12:29:37.913091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=38574460, Actual=38574660 00:09:55.577 [2024-10-01 12:29:37.914595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.916084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.917593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.577 [2024-10-01 12:29:37.919071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.577 [2024-10-01 12:29:37.920574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=8a42f469 00:09:55.577 [2024-10-01 12:29:37.921959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=38574660, Actual=e01d04f6 00:09:55.577 [2024-10-01 12:29:37.923352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.577 [2024-10-01 12:29:37.924854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=88010a2d4837a066, Actual=88010a2d4837a266 00:09:55.577 [2024-10-01 12:29:37.926333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.927816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.929312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=2000060 00:09:55.577 [2024-10-01 12:29:37.930801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=2000060 00:09:55.577 [2024-10-01 12:29:37.932317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.577 [2024-10-01 12:29:37.933700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=88010a2d4837a266, Actual=861af84f742f36b0 00:09:55.577 passed 00:09:55.577 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-10-01 12:29:37.934553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:09:55.577 [2024-10-01 12:29:37.934745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:09:55.577 [2024-10-01 12:29:37.934938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.935129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 
12:29:37.935339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.935529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.935723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cf1f 00:09:55.577 [2024-10-01 12:29:37.935894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9af4 00:09:55.577 [2024-10-01 12:29:37.936063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab751ed, Actual=1ab753ed 00:09:55.577 [2024-10-01 12:29:37.936260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574460, Actual=38574660 00:09:55.577 [2024-10-01 12:29:37.936464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.936659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.936865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.937047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.937239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8a42f469 00:09:55.577 [2024-10-01 12:29:37.937394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=e01d04f6 00:09:55.577 [2024-10-01 12:29:37.937567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.577 [2024-10-01 12:29:37.937759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a066, Actual=88010a2d4837a266 00:09:55.577 [2024-10-01 12:29:37.937952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.938135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.938330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.577 [2024-10-01 12:29:37.938519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.577 [2024-10-01 12:29:37.938716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.577 [2024-10-01 12:29:37.938887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=861af84f742f36b0 00:09:55.577 passed 00:09:55.577 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-10-01 12:29:37.939086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:09:55.577 [2024-10-01 12:29:37.939286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:09:55.577 [2024-10-01 12:29:37.939479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.939672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.939882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.940071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.940267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cf1f 00:09:55.577 [2024-10-01 12:29:37.940437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9af4 00:09:55.577 [2024-10-01 12:29:37.940599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab751ed, Actual=1ab753ed 00:09:55.577 [2024-10-01 12:29:37.940792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574460, Actual=38574660 00:09:55.577 [2024-10-01 12:29:37.940982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.941195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.941404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.941589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.577 [2024-10-01 12:29:37.941790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8a42f469 00:09:55.577 [2024-10-01 12:29:37.941960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=e01d04f6 00:09:55.577 [2024-10-01 12:29:37.942135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.577 [2024-10-01 12:29:37.942323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a066, Actual=88010a2d4837a266 00:09:55.577 [2024-10-01 12:29:37.942517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.942712] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.577 [2024-10-01 12:29:37.942909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.577 [2024-10-01 12:29:37.943097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.577 [2024-10-01 12:29:37.943302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.577 [2024-10-01 12:29:37.943464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=861af84f742f36b0 00:09:55.577 passed 00:09:55.577 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-10-01 12:29:37.943662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:09:55.578 [2024-10-01 12:29:37.943863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:09:55.578 [2024-10-01 12:29:37.944077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.944273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.944486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.944675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.944869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cf1f 00:09:55.578 [2024-10-01 12:29:37.945029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9af4 00:09:55.578 [2024-10-01 12:29:37.945197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab751ed, Actual=1ab753ed 00:09:55.578 [2024-10-01 12:29:37.945379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574460, Actual=38574660 00:09:55.578 [2024-10-01 12:29:37.945584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.945778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.945962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.946161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.946366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=1ab753ed, Actual=8a42f469 00:09:55.578 [2024-10-01 12:29:37.946522] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=e01d04f6 00:09:55.578 [2024-10-01 12:29:37.946690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.578 [2024-10-01 12:29:37.946884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a066, Actual=88010a2d4837a266 00:09:55.578 [2024-10-01 12:29:37.947067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.947256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.947449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.578 [2024-10-01 12:29:37.947643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.578 [2024-10-01 12:29:37.947847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.578 [2024-10-01 12:29:37.948023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=861af84f742f36b0 00:09:55.578 passed 00:09:55.578 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-10-01 12:29:37.948224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:09:55.578 [2024-10-01 12:29:37.948410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:09:55.578 [2024-10-01 12:29:37.948602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.948807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.949012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.949200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.949387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cf1f 00:09:55.578 [2024-10-01 12:29:37.949553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9af4 00:09:55.578 passed 00:09:55.578 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-10-01 12:29:37.949755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab751ed, Actual=1ab753ed 00:09:55.578 [2024-10-01 12:29:37.949947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574460, Actual=38574660 00:09:55.578 [2024-10-01 12:29:37.950145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.950321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.950513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.950702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.950895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8a42f469 00:09:55.578 [2024-10-01 12:29:37.951070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=e01d04f6 00:09:55.578 [2024-10-01 12:29:37.951271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.578 [2024-10-01 12:29:37.951465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a066, Actual=88010a2d4837a266 00:09:55.578 [2024-10-01 12:29:37.951648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.951841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.952037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.578 [2024-10-01 12:29:37.952238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.578 [2024-10-01 12:29:37.952437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.578 [2024-10-01 12:29:37.952604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=861af84f742f36b0 00:09:55.578 passed 00:09:55.578 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-10-01 12:29:37.952800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:09:55.578 [2024-10-01 12:29:37.952993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:09:55.578 [2024-10-01 12:29:37.953192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.953385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.953592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref 
Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.953779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.953973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cf1f 00:09:55.578 [2024-10-01 12:29:37.954129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9af4 00:09:55.578 passed 00:09:55.578 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-10-01 12:29:37.954316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab751ed, Actual=1ab753ed 00:09:55.578 [2024-10-01 12:29:37.954505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574460, Actual=38574660 00:09:55.578 [2024-10-01 12:29:37.954709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.954898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.955092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.955283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.578 [2024-10-01 12:29:37.955488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8a42f469 00:09:55.578 [2024-10-01 12:29:37.955644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=e01d04f6 00:09:55.578 [2024-10-01 12:29:37.955850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.578 [2024-10-01 12:29:37.956052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a066, Actual=88010a2d4837a266 00:09:55.578 [2024-10-01 12:29:37.956251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.956433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.578 [2024-10-01 12:29:37.956626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.578 [2024-10-01 12:29:37.956803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.578 [2024-10-01 12:29:37.957000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.578 [2024-10-01 12:29:37.957169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=861af84f742f36b0 
00:09:55.578 passed 00:09:55.578 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:09:55.578 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:55.578 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:55.578 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:55.579 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:55.579 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:55.579 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:55.579 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:55.579 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:55.579 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-10-01 12:29:37.984134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=ff4c, Actual=fd4c 00:09:55.579 [2024-10-01 12:29:37.984823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=5113, Actual=5313 00:09:55.579 [2024-10-01 12:29:37.985500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:37.986170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:37.986848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.579 [2024-10-01 12:29:37.987514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.579 [2024-10-01 12:29:37.988189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=cf1f 00:09:55.579 [2024-10-01 12:29:37.988867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e4c1, Actual=8014 00:09:55.579 [2024-10-01 12:29:37.989544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab751ed, Actual=1ab753ed 00:09:55.579 [2024-10-01 12:29:37.990214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=da9d8b9c, Actual=da9d899c 00:09:55.579 [2024-10-01 12:29:37.990888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:37.991569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:37.992258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.579 [2024-10-01 12:29:37.992944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=260 00:09:55.579 [2024-10-01 12:29:37.993619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=8a42f469 00:09:55.579 [2024-10-01 12:29:37.994292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=96, Expected=741688fe, Actual=ac5cca68 00:09:55.579 [2024-10-01 12:29:37.994952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.579 [2024-10-01 12:29:37.995649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=bdbf797dc979caab, Actual=bdbf797dc979c8ab 00:09:55.579 [2024-10-01 12:29:37.996335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:37.997017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:37.997678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=2000060 00:09:55.579 [2024-10-01 12:29:37.998357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=2000060 00:09:55.579 [2024-10-01 12:29:37.999030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.579 [2024-10-01 12:29:37.999725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4c87c8f68d0ed55f, Actual=429c3a94b1164189 00:09:55.579 passed 00:09:55.579 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-10-01 12:29:37.999970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:09:55.579 [2024-10-01 12:29:38.000140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:09:55.579 [2024-10-01 12:29:38.000316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:38.000482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:38.000665] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.579 [2024-10-01 12:29:38.000843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.579 [2024-10-01 12:29:38.001014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=cf1f 00:09:55.579 [2024-10-01 12:29:38.001185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=49c3 00:09:55.579 [2024-10-01 12:29:38.001355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab751ed, Actual=1ab753ed 00:09:55.579 [2024-10-01 12:29:38.001520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=442d43ce, Actual=442d41ce 00:09:55.579 [2024-10-01 12:29:38.001694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 
00:09:55.579 [2024-10-01 12:29:38.001866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:38.002032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.579 [2024-10-01 12:29:38.002203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:09:55.579 [2024-10-01 12:29:38.002376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8a42f469 00:09:55.579 [2024-10-01 12:29:38.002562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=32ec023a 00:09:55.579 [2024-10-01 12:29:38.002750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:09:55.579 [2024-10-01 12:29:38.002907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c95e9b9012ad2f56, Actual=c95e9b9012ad2d56 00:09:55.579 [2024-10-01 12:29:38.003086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:38.003252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:09:55.579 [2024-10-01 12:29:38.003427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.579 [2024-10-01 12:29:38.003582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:09:55.579 [2024-10-01 12:29:38.003767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ec06af610e253032 00:09:55.579 [2024-10-01 12:29:38.003947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=f67bdcf56eeac773 00:09:55.579 passed 00:09:55.579 Test: dix_sec_512_md_0_error ...[2024-10-01 12:29:38.003993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
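[Editor's note on the dif/dix wall of output above: none of these *ERROR* lines are failures. The suite injects corrupted protection information and asserts that the verify path catches every case; each Expected/Actual pair shows the injected mismatch, and the dix variants repeat the checks with the protection information in a separate metadata buffer (hence the "Metadata size is smaller than DIF size" rejections from spdk_dif_ctx_init). The Guard tag compared throughout is CRC-16/T10-DIF over the block data; the 16-bit App Tag and 32-bit Ref Tag beside it are stored in the 8-byte PI tuple, with the Ref Tag conventionally tracking the LBA. A bitwise reference implementation for orientation -- illustrative, not SPDK's table-driven code:]

/* CRC-16/T10-DIF: poly 0x8bb7, init 0, no reflection, no final xor. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t crc16_t10dif(uint16_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    while (len--) {
        crc ^= (uint16_t)(*p++) << 8;   /* feed one byte, MSB first */
        for (int i = 0; i < 8; i++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8bb7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    /* Published check value for the input "123456789" is 0xd0db. */
    printf("guard = 0x%04x\n", crc16_t10dif(0, "123456789", 9));
    return 0;
}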
00:09:55.579 passed 00:09:55.579 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:09:55.579 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:55.579 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:55.579 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:55.579 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:55.579 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:55.579 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:55.579 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:55.579 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:55.579 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-10-01 12:29:38.030430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd44, Actual=fd4c 00:09:55.579 [2024-10-01 12:29:38.031108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=9d7d, Actual=9d75 00:09:55.579 [2024-10-01 12:29:38.031768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=80 00:09:55.579 [2024-10-01 12:29:38.032459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=80 00:09:55.579 [2024-10-01 12:29:38.033138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=54 00:09:55.579 [2024-10-01 12:29:38.033817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=54 00:09:55.579 [2024-10-01 12:29:38.034483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=4a79 00:09:55.579 [2024-10-01 12:29:38.035164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=6f55 00:09:55.579 [2024-10-01 12:29:38.035828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753e5, Actual=1ab753ed 00:09:55.579 [2024-10-01 12:29:38.036503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=c4884df2, Actual=c4884dfa 00:09:55.579 [2024-10-01 12:29:38.037176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=80 00:09:55.579 [2024-10-01 12:29:38.037847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=80 00:09:55.579 [2024-10-01 12:29:38.038510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=54 00:09:55.579 [2024-10-01 12:29:38.039176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=54 00:09:55.579 [2024-10-01 12:29:38.039836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=7b3013f8 00:09:55.579 [2024-10-01 12:29:38.040518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=6f1ebd20 
00:09:55.579 [2024-10-01 12:29:38.041210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20db, Actual=a576a7728ecc20d3 00:09:55.579 [2024-10-01 12:29:38.041869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=75d0be29b686d9c8, Actual=75d0be29b686d9c0 00:09:55.580 [2024-10-01 12:29:38.042540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.043206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.043894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=80000005c 00:09:55.580 [2024-10-01 12:29:38.044563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=80000005c 00:09:55.580 [2024-10-01 12:29:38.045239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=62b9ba22afacb4c5 00:09:55.580 [2024-10-01 12:29:38.045899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=399d3664805432c 00:09:55.580 passed 00:09:55.580 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-10-01 12:29:38.046134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd44, Actual=fd4c 00:09:55.580 [2024-10-01 12:29:38.046306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fefc, Actual=fef4 00:09:55.580 [2024-10-01 12:29:38.046478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.046646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.046847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:09:55.580 [2024-10-01 12:29:38.047018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:09:55.580 [2024-10-01 12:29:38.047192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4a79 00:09:55.580 [2024-10-01 12:29:38.047359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=cd4 00:09:55.580 [2024-10-01 12:29:38.047533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e5, Actual=1ab753ed 00:09:55.580 [2024-10-01 12:29:38.047706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5087807, Actual=508780f 00:09:55.580 [2024-10-01 12:29:38.047904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.048079] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.048248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:09:55.580 [2024-10-01 12:29:38.048414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=50 00:09:55.580 [2024-10-01 12:29:38.048582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7b3013f8 00:09:55.580 [2024-10-01 12:29:38.048753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ae9e88d5 00:09:55.580 [2024-10-01 12:29:38.048931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20db, Actual=a576a7728ecc20d3 00:09:55.580 [2024-10-01 12:29:38.049116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8032b1ba90695d97, Actual=8032b1ba90695d9f 00:09:55.580 [2024-10-01 12:29:38.049281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.049455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=80 00:09:55.580 [2024-10-01 12:29:38.049617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:09:55.580 [2024-10-01 12:29:38.049791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000058 00:09:55.580 [2024-10-01 12:29:38.049965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=62b9ba22afacb4c5 00:09:55.580 [2024-10-01 12:29:38.050134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=f67bdcf56eeac773 00:09:55.580 passed 00:09:55.580 Test: set_md_interleave_iovs_test ...passed 00:09:55.580 Test: set_md_interleave_iovs_split_test ...passed 00:09:55.580 Test: dif_generate_stream_pi_16_test ...passed 00:09:55.580 Test: dif_generate_stream_test ...passed 00:09:55.580 Test: set_md_interleave_iovs_alignment_test ...passed 00:09:55.580 Test: dif_generate_split_test ...[2024-10-01 12:29:38.054941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:09:55.580 passed 00:09:55.580 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:09:55.580 Test: dif_verify_split_test ...passed 00:09:55.580 Test: dif_verify_stream_multi_segments_test ...passed 00:09:55.580 Test: update_crc32c_pi_16_test ...passed 00:09:55.580 Test: update_crc32c_test ...passed 00:09:55.580 Test: dif_update_crc32c_split_test ...passed 00:09:55.580 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:09:55.580 Test: get_range_with_md_test ...passed 00:09:55.580 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:09:55.580 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:09:55.580 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:55.580 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:09:55.580 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:09:55.580 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:55.580 Test: dif_generate_and_verify_unmap_test ...passed 00:09:55.580 00:09:55.580 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.580 suites 1 1 n/a 0 0 00:09:55.580 tests 79 79 79 0 0 00:09:55.580 asserts 3584 3584 3584 0 n/a 00:09:55.580 00:09:55.580 Elapsed time = 0.223 seconds 00:09:55.839 12:29:38 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:09:55.839 00:09:55.839 00:09:55.839 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.839 http://cunit.sourceforge.net/ 00:09:55.839 00:09:55.839 00:09:55.839 Suite: iov 00:09:55.839 Test: test_single_iov ...passed 00:09:55.839 Test: test_simple_iov ...passed 00:09:55.839 Test: test_complex_iov ...passed 00:09:55.839 Test: test_iovs_to_buf ...passed 00:09:55.839 Test: test_buf_to_iovs ...passed 00:09:55.839 Test: test_memset ...passed 00:09:55.839 Test: test_iov_one ...passed 00:09:55.839 Test: test_iov_xfer ...passed 00:09:55.839 00:09:55.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.839 suites 1 1 n/a 0 0 00:09:55.839 tests 8 8 8 0 0 00:09:55.839 asserts 156 156 156 0 n/a 00:09:55.839 00:09:55.839 Elapsed time = 0.000 seconds 00:09:55.839 12:29:38 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:09:55.839 00:09:55.839 00:09:55.839 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.839 http://cunit.sourceforge.net/ 00:09:55.839 00:09:55.839 00:09:55.839 Suite: math 00:09:55.839 Test: test_serial_number_arithmetic ...passed 00:09:55.839 Suite: erase 00:09:55.839 Test: test_memset_s ...passed 00:09:55.839 00:09:55.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.839 suites 2 2 n/a 0 0 00:09:55.839 tests 2 2 2 0 0 00:09:55.839 asserts 18 18 18 0 n/a 00:09:55.839 00:09:55.839 Elapsed time = 0.000 seconds 00:09:55.839 12:29:38 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:09:55.839 00:09:55.839 00:09:55.839 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.839 http://cunit.sourceforge.net/ 00:09:55.839 00:09:55.839 00:09:55.839 Suite: pipe 00:09:55.839 Test: test_create_destroy ...passed 00:09:55.839 Test: test_write_get_buffer ...passed 00:09:55.839 Test: test_write_advance ...passed 00:09:55.839 Test: test_read_get_buffer ...passed 00:09:55.839 Test: test_read_advance ...passed 00:09:55.839 Test: test_data ...passed 00:09:55.839 00:09:55.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.839 suites 1 1 n/a 0 
0 00:09:55.839 tests 6 6 6 0 0 00:09:55.839 asserts 250 250 250 0 n/a 00:09:55.839 00:09:55.839 Elapsed time = 0.000 seconds 00:09:55.839 12:29:38 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:09:55.839 00:09:55.839 00:09:55.839 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.839 http://cunit.sourceforge.net/ 00:09:55.839 00:09:55.839 00:09:55.839 Suite: xor 00:09:55.839 Test: test_xor_gen ...passed 00:09:55.839 00:09:55.839 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.839 suites 1 1 n/a 0 0 00:09:55.839 tests 1 1 1 0 0 00:09:55.839 asserts 17 17 17 0 n/a 00:09:55.839 00:09:55.839 Elapsed time = 0.008 seconds 00:09:55.839 00:09:55.839 real 0m0.832s 00:09:55.839 user 0m0.520s 00:09:55.839 sys 0m0.319s 00:09:55.839 12:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.839 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:09:55.839 ************************************ 00:09:55.839 END TEST unittest_util 00:09:55.839 ************************************ 00:09:56.098 12:29:38 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:56.098 12:29:38 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:56.098 12:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:56.098 12:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.098 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:09:56.098 ************************************ 00:09:56.098 START TEST unittest_vhost 00:09:56.098 ************************************ 00:09:56.098 12:29:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:56.098 00:09:56.098 00:09:56.098 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.098 http://cunit.sourceforge.net/ 00:09:56.098 00:09:56.098 00:09:56.098 Suite: vhost_suite 00:09:56.098 Test: desc_to_iov_test ...[2024-10-01 12:29:38.429750] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:09:56.098 passed 00:09:56.098 Test: create_controller_test ...[2024-10-01 12:29:38.434280] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:56.098 [2024-10-01 12:29:38.434399] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:09:56.098 [2024-10-01 12:29:38.434527] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:56.098 [2024-10-01 12:29:38.434616] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:09:56.098 [2024-10-01 12:29:38.434679] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:09:56.099 [2024-10-01 12:29:38.434784] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-10-01 12:29:38.435692] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:09:56.099 passed 00:09:56.099 Test: session_find_by_vid_test ...passed 00:09:56.099 Test: remove_controller_test ...[2024-10-01 12:29:38.437550] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:09:56.099 passed 00:09:56.099 Test: vq_avail_ring_get_test ...passed 00:09:56.099 Test: vq_packed_ring_test ...passed 00:09:56.099 Test: vhost_blk_construct_test ...passed 00:09:56.099 00:09:56.099 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.099 suites 1 1 n/a 0 0 00:09:56.099 tests 7 7 7 0 0 00:09:56.099 asserts 145 145 145 0 n/a 00:09:56.099 00:09:56.099 Elapsed time = 0.012 seconds 00:09:56.099 00:09:56.099 real 0m0.066s 00:09:56.099 user 0m0.033s 00:09:56.099 sys 0m0.034s 00:09:56.099 12:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.099 ************************************ 00:09:56.099 END TEST unittest_vhost 00:09:56.099 ************************************ 00:09:56.099 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:09:56.099 12:29:38 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:56.099 12:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:56.099 12:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.099 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:09:56.099 ************************************ 00:09:56.099 START TEST unittest_dma 00:09:56.099 ************************************ 00:09:56.099 12:29:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:56.099 00:09:56.099 00:09:56.099 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.099 http://cunit.sourceforge.net/ 00:09:56.099 00:09:56.099 00:09:56.099 Suite: dma_suite 00:09:56.099 Test: test_dma ...[2024-10-01 12:29:38.565082] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:09:56.099 passed 00:09:56.099 00:09:56.099 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.099 suites 1 1 n/a 0 0 00:09:56.099 tests 1 1 1 0 0 00:09:56.099 asserts 50 50 50 0 n/a 00:09:56.099 00:09:56.099 Elapsed time = 0.001 seconds 00:09:56.099 00:09:56.099 real 0m0.050s 00:09:56.099 user 0m0.033s 00:09:56.099 sys 0m0.018s 00:09:56.099 12:29:38 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.099 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:09:56.099 ************************************ 00:09:56.099 END TEST unittest_dma 00:09:56.099 ************************************ 00:09:56.357 12:29:38 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:09:56.357 12:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:56.357 12:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.357 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:09:56.357 ************************************ 00:09:56.357 START TEST unittest_init 00:09:56.357 ************************************ 00:09:56.357 12:29:38 -- common/autotest_common.sh@1104 -- # unittest_init 00:09:56.357 12:29:38 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:09:56.357 00:09:56.357 00:09:56.357 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.357 http://cunit.sourceforge.net/ 00:09:56.357 00:09:56.357 00:09:56.357 Suite: subsystem_suite 00:09:56.357 Test: subsystem_sort_test_depends_on_single ...passed 00:09:56.357 Test: subsystem_sort_test_depends_on_multiple ...passed 00:09:56.357 Test: subsystem_sort_test_missing_dependency ...[2024-10-01 12:29:38.699636] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:09:56.357 passed 00:09:56.357 00:09:56.357 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.357 suites 1 1 n/a 0 0 00:09:56.357 tests 3 3 3 0 0 00:09:56.357 asserts 20 20 20 0 n/a 00:09:56.357 00:09:56.357 Elapsed time = 0.001 seconds 00:09:56.357 [2024-10-01 12:29:38.700093] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:09:56.357 00:09:56.357 real 0m0.055s 00:09:56.357 user 0m0.031s 00:09:56.357 sys 0m0.024s 00:09:56.357 12:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.357 ************************************ 00:09:56.357 END TEST unittest_init 00:09:56.357 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:09:56.357 ************************************ 00:09:56.357 12:29:38 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:09:56.357 12:29:38 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:56.357 12:29:38 -- unit/unittest.sh@290 -- # hostname 00:09:56.357 12:29:38 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:56.616 geninfo: WARNING: invalid characters removed from testname! 
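A note on the many `*ERROR*: Failed to compare Guard/App Tag/Ref Tag` lines in the unit-test output above: they are expected. The dif/dix negative-path tests deliberately corrupt one field of a protection-information tuple and assert that verification rejects it, which is why the suite still finishes 79/79 passed. Below is a minimal sketch of the three comparisons being exercised, under the assumption of a simplified 8-byte T10-DIF layout; the struct and helper names are hypothetical and SPDK's real dif.c also handles 32/64-bit guards and extended formats.

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative protection-information tuple (simplified 8-byte layout). */
struct pi_tuple {
	uint16_t guard;   /* CRC over the data block */
	uint16_t app_tag; /* application-defined tag */
	uint32_t ref_tag; /* typically derived from the LBA */
};

/* Hypothetical helper mirroring the three checks whose injected failures
 * are logged above by _dif_verify. */
static bool
dif_verify_tuple(const struct pi_tuple *expected, const struct pi_tuple *actual,
		 uint64_t lba)
{
	if (expected->guard != actual->guard) {
		fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
			", Expected=%" PRIx16 ", Actual=%" PRIx16 "\n",
			lba, expected->guard, actual->guard);
		return false;
	}
	if (expected->app_tag != actual->app_tag) {
		fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64 "\n", lba);
		return false;
	}
	if (expected->ref_tag != actual->ref_tag) {
		fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64 "\n", lba);
		return false;
	}
	return true;
}
```

In the log, the injected corruption is typically a single flipped bit or added nibble (e.g. Expected=88, Actual=288), so exactly one of the comparisons trips per test step.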
00:10:23.262 12:30:02 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:10:24.197 12:30:06 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:26.733 12:30:09 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:29.304 12:30:11 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:31.840 12:30:13 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:33.745 12:30:16 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:36.281 12:30:18 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:38.186 12:30:20 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:10:38.186 12:30:20 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:38.754 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:38.754 Found 309 entries. 
00:10:38.754 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:10:38.754 Writing .css and .png files. 00:10:38.754 Generating output. 00:10:38.754 Processing file include/linux/virtio_ring.h 00:10:39.014 Processing file include/spdk/endian.h 00:10:39.014 Processing file include/spdk/nvme_spec.h 00:10:39.014 Processing file include/spdk/bdev_module.h 00:10:39.014 Processing file include/spdk/thread.h 00:10:39.014 Processing file include/spdk/mmio.h 00:10:39.014 Processing file include/spdk/util.h 00:10:39.014 Processing file include/spdk/base64.h 00:10:39.014 Processing file include/spdk/nvmf_transport.h 00:10:39.014 Processing file include/spdk/histogram_data.h 00:10:39.014 Processing file include/spdk/nvme.h 00:10:39.014 Processing file include/spdk/trace.h 00:10:39.273 Processing file include/spdk_internal/utf.h 00:10:39.273 Processing file include/spdk_internal/sock.h 00:10:39.273 Processing file include/spdk_internal/nvme_tcp.h 00:10:39.273 Processing file include/spdk_internal/sgl.h 00:10:39.273 Processing file include/spdk_internal/virtio.h 00:10:39.273 Processing file include/spdk_internal/rdma.h 00:10:39.533 Processing file lib/accel/accel_sw.c 00:10:39.533 Processing file lib/accel/accel_rpc.c 00:10:39.533 Processing file lib/accel/accel.c 00:10:39.793 Processing file lib/bdev/scsi_nvme.c 00:10:39.793 Processing file lib/bdev/part.c 00:10:39.793 Processing file lib/bdev/bdev_rpc.c 00:10:39.793 Processing file lib/bdev/bdev.c 00:10:39.793 Processing file lib/bdev/bdev_zone.c 00:10:40.054 Processing file lib/blob/blobstore.h 00:10:40.054 Processing file lib/blob/request.c 00:10:40.054 Processing file lib/blob/zeroes.c 00:10:40.054 Processing file lib/blob/blob_bs_dev.c 00:10:40.054 Processing file lib/blob/blobstore.c 00:10:40.054 Processing file lib/blobfs/blobfs.c 00:10:40.054 Processing file lib/blobfs/tree.c 00:10:40.054 Processing file lib/conf/conf.c 00:10:40.054 Processing file lib/dma/dma.c 00:10:40.623 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:10:40.623 Processing file lib/env_dpdk/pci_virtio.c 00:10:40.623 Processing file lib/env_dpdk/env.c 00:10:40.624 Processing file lib/env_dpdk/threads.c 00:10:40.624 Processing file lib/env_dpdk/pci_vmd.c 00:10:40.624 Processing file lib/env_dpdk/pci_ioat.c 00:10:40.624 Processing file lib/env_dpdk/pci.c 00:10:40.624 Processing file lib/env_dpdk/init.c 00:10:40.624 Processing file lib/env_dpdk/pci_event.c 00:10:40.624 Processing file lib/env_dpdk/pci_idxd.c 00:10:40.624 Processing file lib/env_dpdk/memory.c 00:10:40.624 Processing file lib/env_dpdk/sigbus_handler.c 00:10:40.624 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:10:40.624 Processing file lib/env_dpdk/pci_dpdk.c 00:10:40.624 Processing file lib/event/log_rpc.c 00:10:40.624 Processing file lib/event/reactor.c 00:10:40.624 Processing file lib/event/app.c 00:10:40.624 Processing file lib/event/scheduler_static.c 00:10:40.624 Processing file lib/event/app_rpc.c 00:10:41.191 Processing file lib/ftl/ftl_band.c 00:10:41.191 Processing file lib/ftl/ftl_trace.c 00:10:41.191 Processing file lib/ftl/ftl_band_ops.c 00:10:41.191 Processing file lib/ftl/ftl_sb.c 00:10:41.191 Processing file lib/ftl/ftl_debug.c 00:10:41.191 Processing file lib/ftl/ftl_nv_cache.h 00:10:41.191 Processing file lib/ftl/ftl_writer.c 00:10:41.191 Processing file lib/ftl/ftl_core.h 00:10:41.191 Processing file lib/ftl/ftl_reloc.c 00:10:41.191 Processing file lib/ftl/ftl_rq.c 00:10:41.191 Processing file lib/ftl/ftl_writer.h 00:10:41.191 Processing file lib/ftl/ftl_band.h 00:10:41.191 
Processing file lib/ftl/ftl_p2l.c 00:10:41.191 Processing file lib/ftl/ftl_core.c 00:10:41.191 Processing file lib/ftl/ftl_nv_cache_io.h 00:10:41.191 Processing file lib/ftl/ftl_l2p_cache.c 00:10:41.191 Processing file lib/ftl/ftl_nv_cache.c 00:10:41.191 Processing file lib/ftl/ftl_l2p_flat.c 00:10:41.191 Processing file lib/ftl/ftl_debug.h 00:10:41.191 Processing file lib/ftl/ftl_io.c 00:10:41.191 Processing file lib/ftl/ftl_init.c 00:10:41.191 Processing file lib/ftl/ftl_l2p.c 00:10:41.191 Processing file lib/ftl/ftl_io.h 00:10:41.191 Processing file lib/ftl/ftl_layout.c 00:10:41.191 Processing file lib/ftl/base/ftl_base_bdev.c 00:10:41.191 Processing file lib/ftl/base/ftl_base_dev.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:10:41.759 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:10:41.759 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:10:41.759 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:10:42.017 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:10:42.017 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:10:42.017 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:10:42.017 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:10:42.276 Processing file lib/ftl/utils/ftl_addr_utils.h 00:10:42.276 Processing file lib/ftl/utils/ftl_md.c 00:10:42.276 Processing file lib/ftl/utils/ftl_conf.c 00:10:42.276 Processing file lib/ftl/utils/ftl_bitmap.c 00:10:42.276 Processing file lib/ftl/utils/ftl_mempool.c 00:10:42.276 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:10:42.276 Processing file lib/ftl/utils/ftl_property.h 00:10:42.276 Processing file lib/ftl/utils/ftl_property.c 00:10:42.276 Processing file lib/ftl/utils/ftl_df.h 00:10:42.276 Processing file lib/idxd/idxd.c 00:10:42.276 Processing file lib/idxd/idxd_user.c 00:10:42.276 Processing file lib/idxd/idxd_internal.h 00:10:42.535 Processing file lib/init/json_config.c 00:10:42.535 Processing file lib/init/rpc.c 00:10:42.535 Processing file lib/init/subsystem_rpc.c 00:10:42.535 Processing file lib/init/subsystem.c 00:10:42.535 Processing file lib/ioat/ioat_internal.h 00:10:42.535 Processing file lib/ioat/ioat.c 00:10:43.106 Processing file lib/iscsi/task.h 00:10:43.106 Processing file lib/iscsi/iscsi_rpc.c 00:10:43.106 Processing file lib/iscsi/md5.c 00:10:43.106 Processing file lib/iscsi/tgt_node.c 00:10:43.106 Processing file lib/iscsi/param.c 00:10:43.106 Processing file lib/iscsi/iscsi_subsystem.c 00:10:43.106 Processing file lib/iscsi/portal_grp.c 00:10:43.106 Processing file lib/iscsi/init_grp.c 00:10:43.106 Processing file lib/iscsi/conn.c 00:10:43.106 Processing file lib/iscsi/task.c 00:10:43.106 Processing file lib/iscsi/iscsi.h 00:10:43.106 Processing file lib/iscsi/iscsi.c 00:10:43.106 Processing file lib/json/json_util.c 00:10:43.106 Processing file lib/json/json_write.c 00:10:43.106 Processing file lib/json/json_parse.c 00:10:43.364 Processing file 
lib/jsonrpc/jsonrpc_client_tcp.c 00:10:43.364 Processing file lib/jsonrpc/jsonrpc_client.c 00:10:43.364 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:10:43.364 Processing file lib/jsonrpc/jsonrpc_server.c 00:10:43.624 Processing file lib/log/log_deprecated.c 00:10:43.624 Processing file lib/log/log_flags.c 00:10:43.624 Processing file lib/log/log.c 00:10:43.624 Processing file lib/lvol/lvol.c 00:10:43.883 Processing file lib/nbd/nbd_rpc.c 00:10:43.883 Processing file lib/nbd/nbd.c 00:10:43.883 Processing file lib/notify/notify_rpc.c 00:10:43.883 Processing file lib/notify/notify.c 00:10:44.819 Processing file lib/nvme/nvme_zns.c 00:10:44.819 Processing file lib/nvme/nvme_io_msg.c 00:10:44.819 Processing file lib/nvme/nvme.c 00:10:44.819 Processing file lib/nvme/nvme_internal.h 00:10:44.819 Processing file lib/nvme/nvme_ctrlr.c 00:10:44.819 Processing file lib/nvme/nvme_ns_cmd.c 00:10:44.819 Processing file lib/nvme/nvme_vfio_user.c 00:10:44.819 Processing file lib/nvme/nvme_rdma.c 00:10:44.819 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:10:44.819 Processing file lib/nvme/nvme_cuse.c 00:10:44.819 Processing file lib/nvme/nvme_pcie_internal.h 00:10:44.819 Processing file lib/nvme/nvme_ns.c 00:10:44.819 Processing file lib/nvme/nvme_fabric.c 00:10:44.819 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:10:44.819 Processing file lib/nvme/nvme_pcie_common.c 00:10:44.819 Processing file lib/nvme/nvme_discovery.c 00:10:44.819 Processing file lib/nvme/nvme_transport.c 00:10:44.819 Processing file lib/nvme/nvme_qpair.c 00:10:44.819 Processing file lib/nvme/nvme_poll_group.c 00:10:44.819 Processing file lib/nvme/nvme_tcp.c 00:10:44.819 Processing file lib/nvme/nvme_pcie.c 00:10:44.819 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:10:44.819 Processing file lib/nvme/nvme_quirks.c 00:10:44.819 Processing file lib/nvme/nvme_opal.c 00:10:45.077 Processing file lib/nvmf/transport.c 00:10:45.077 Processing file lib/nvmf/nvmf.c 00:10:45.077 Processing file lib/nvmf/ctrlr_discovery.c 00:10:45.077 Processing file lib/nvmf/nvmf_rpc.c 00:10:45.077 Processing file lib/nvmf/nvmf_internal.h 00:10:45.077 Processing file lib/nvmf/ctrlr.c 00:10:45.077 Processing file lib/nvmf/tcp.c 00:10:45.077 Processing file lib/nvmf/ctrlr_bdev.c 00:10:45.077 Processing file lib/nvmf/rdma.c 00:10:45.077 Processing file lib/nvmf/subsystem.c 00:10:45.077 Processing file lib/rdma/rdma_verbs.c 00:10:45.077 Processing file lib/rdma/common.c 00:10:45.336 Processing file lib/rpc/rpc.c 00:10:45.594 Processing file lib/scsi/dev.c 00:10:45.594 Processing file lib/scsi/task.c 00:10:45.594 Processing file lib/scsi/scsi_rpc.c 00:10:45.594 Processing file lib/scsi/scsi.c 00:10:45.594 Processing file lib/scsi/lun.c 00:10:45.594 Processing file lib/scsi/scsi_bdev.c 00:10:45.594 Processing file lib/scsi/scsi_pr.c 00:10:45.594 Processing file lib/scsi/port.c 00:10:45.594 Processing file lib/sock/sock_rpc.c 00:10:45.594 Processing file lib/sock/sock.c 00:10:45.890 Processing file lib/thread/thread.c 00:10:45.890 Processing file lib/thread/iobuf.c 00:10:45.890 Processing file lib/trace/trace_rpc.c 00:10:45.890 Processing file lib/trace/trace.c 00:10:45.890 Processing file lib/trace/trace_flags.c 00:10:46.159 Processing file lib/trace_parser/trace.cpp 00:10:46.159 Processing file lib/ut/ut.c 00:10:46.159 Processing file lib/ut_mock/mock.c 00:10:46.727 Processing file lib/util/fd.c 00:10:46.727 Processing file lib/util/uuid.c 00:10:46.727 Processing file lib/util/crc32.c 00:10:46.727 Processing file lib/util/xor.c 00:10:46.727 Processing file 
lib/util/dif.c 00:10:46.727 Processing file lib/util/crc32c.c 00:10:46.727 Processing file lib/util/crc64.c 00:10:46.727 Processing file lib/util/iov.c 00:10:46.727 Processing file lib/util/string.c 00:10:46.727 Processing file lib/util/crc16.c 00:10:46.727 Processing file lib/util/pipe.c 00:10:46.727 Processing file lib/util/crc32_ieee.c 00:10:46.727 Processing file lib/util/hexlify.c 00:10:46.727 Processing file lib/util/strerror_tls.c 00:10:46.727 Processing file lib/util/math.c 00:10:46.727 Processing file lib/util/cpuset.c 00:10:46.727 Processing file lib/util/file.c 00:10:46.727 Processing file lib/util/base64.c 00:10:46.727 Processing file lib/util/bit_array.c 00:10:46.727 Processing file lib/util/fd_group.c 00:10:46.727 Processing file lib/util/zipf.c 00:10:46.727 Processing file lib/vfio_user/host/vfio_user.c 00:10:46.727 Processing file lib/vfio_user/host/vfio_user_pci.c 00:10:46.984 Processing file lib/vhost/vhost_internal.h 00:10:46.984 Processing file lib/vhost/vhost_rpc.c 00:10:46.984 Processing file lib/vhost/vhost_blk.c 00:10:46.984 Processing file lib/vhost/vhost.c 00:10:46.984 Processing file lib/vhost/vhost_scsi.c 00:10:46.984 Processing file lib/vhost/rte_vhost_user.c 00:10:47.243 Processing file lib/virtio/virtio_pci.c 00:10:47.243 Processing file lib/virtio/virtio_vhost_user.c 00:10:47.243 Processing file lib/virtio/virtio_vfio_user.c 00:10:47.243 Processing file lib/virtio/virtio.c 00:10:47.243 Processing file lib/vmd/led.c 00:10:47.243 Processing file lib/vmd/vmd.c 00:10:47.500 Processing file module/accel/dsa/accel_dsa.c 00:10:47.500 Processing file module/accel/dsa/accel_dsa_rpc.c 00:10:47.500 Processing file module/accel/error/accel_error.c 00:10:47.500 Processing file module/accel/error/accel_error_rpc.c 00:10:47.500 Processing file module/accel/iaa/accel_iaa_rpc.c 00:10:47.500 Processing file module/accel/iaa/accel_iaa.c 00:10:47.758 Processing file module/accel/ioat/accel_ioat_rpc.c 00:10:47.758 Processing file module/accel/ioat/accel_ioat.c 00:10:47.758 Processing file module/bdev/aio/bdev_aio_rpc.c 00:10:47.758 Processing file module/bdev/aio/bdev_aio.c 00:10:48.015 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:10:48.015 Processing file module/bdev/delay/vbdev_delay.c 00:10:48.015 Processing file module/bdev/error/vbdev_error.c 00:10:48.015 Processing file module/bdev/error/vbdev_error_rpc.c 00:10:48.015 Processing file module/bdev/ftl/bdev_ftl.c 00:10:48.015 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:10:48.273 Processing file module/bdev/gpt/gpt.c 00:10:48.273 Processing file module/bdev/gpt/gpt.h 00:10:48.273 Processing file module/bdev/gpt/vbdev_gpt.c 00:10:48.273 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:10:48.273 Processing file module/bdev/iscsi/bdev_iscsi.c 00:10:48.531 Processing file module/bdev/lvol/vbdev_lvol.c 00:10:48.531 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:10:48.531 Processing file module/bdev/malloc/bdev_malloc.c 00:10:48.531 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:10:48.789 Processing file module/bdev/null/bdev_null.c 00:10:48.789 Processing file module/bdev/null/bdev_null_rpc.c 00:10:49.047 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:10:49.047 Processing file module/bdev/nvme/bdev_mdns_client.c 00:10:49.047 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:10:49.047 Processing file module/bdev/nvme/nvme_rpc.c 00:10:49.047 Processing file module/bdev/nvme/vbdev_opal.c 00:10:49.047 Processing file module/bdev/nvme/bdev_nvme.c 00:10:49.047 Processing file 
module/bdev/nvme/bdev_nvme_rpc.c 00:10:49.047 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:10:49.047 Processing file module/bdev/passthru/vbdev_passthru.c 00:10:49.306 Processing file module/bdev/raid/concat.c 00:10:49.306 Processing file module/bdev/raid/bdev_raid.h 00:10:49.306 Processing file module/bdev/raid/raid0.c 00:10:49.306 Processing file module/bdev/raid/bdev_raid_sb.c 00:10:49.306 Processing file module/bdev/raid/raid5f.c 00:10:49.306 Processing file module/bdev/raid/bdev_raid_rpc.c 00:10:49.306 Processing file module/bdev/raid/bdev_raid.c 00:10:49.306 Processing file module/bdev/raid/raid1.c 00:10:49.306 Processing file module/bdev/split/vbdev_split.c 00:10:49.306 Processing file module/bdev/split/vbdev_split_rpc.c 00:10:49.563 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:10:49.563 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:10:49.563 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:10:49.563 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:10:49.563 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:10:49.821 Processing file module/blob/bdev/blob_bdev.c 00:10:49.821 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:10:49.821 Processing file module/blobfs/bdev/blobfs_bdev.c 00:10:49.821 Processing file module/env_dpdk/env_dpdk_rpc.c 00:10:49.821 Processing file module/event/subsystems/accel/accel.c 00:10:50.079 Processing file module/event/subsystems/bdev/bdev.c 00:10:50.079 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:10:50.079 Processing file module/event/subsystems/iobuf/iobuf.c 00:10:50.079 Processing file module/event/subsystems/iscsi/iscsi.c 00:10:50.338 Processing file module/event/subsystems/nbd/nbd.c 00:10:50.338 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:10:50.338 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:10:50.338 Processing file module/event/subsystems/scheduler/scheduler.c 00:10:50.338 Processing file module/event/subsystems/scsi/scsi.c 00:10:50.596 Processing file module/event/subsystems/sock/sock.c 00:10:50.596 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:10:50.596 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:10:50.854 Processing file module/event/subsystems/vmd/vmd.c 00:10:50.854 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:10:50.854 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:10:50.854 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:10:50.854 Processing file module/scheduler/gscheduler/gscheduler.c 00:10:51.154 Processing file module/sock/sock_kernel.h 00:10:51.154 Processing file module/sock/posix/posix.c 00:10:51.154 Writing directory view page. 
00:10:51.154 Overall coverage rate: 00:10:51.154 lines......: 39.1% (39266 of 100422 lines) 00:10:51.154 functions..: 42.8% (3587 of 8384 functions) 00:10:51.154 00:10:51.154 00:10:51.154 ===================== 00:10:51.154 All unit tests passed 00:10:51.154 ===================== 00:10:51.154 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:51.154 12:30:33 -- unit/unittest.sh@302 -- # set +x 00:10:51.154 00:10:51.154 00:10:51.154 00:10:51.154 real 3m10.834s 00:10:51.154 user 2m39.141s 00:10:51.154 sys 0m20.758s 00:10:51.154 12:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.154 ************************************ 00:10:51.154 END TEST unittest 00:10:51.154 12:30:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.154 ************************************ 00:10:51.154 12:30:33 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:10:51.154 12:30:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:10:51.154 12:30:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:10:51.154 12:30:33 -- spdk/autotest.sh@173 -- # timing_enter lib 00:10:51.154 12:30:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:51.154 12:30:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.154 12:30:33 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:51.154 12:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:51.154 12:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.154 12:30:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.154 ************************************ 00:10:51.154 START TEST env 00:10:51.154 ************************************ 00:10:51.154 12:30:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:51.413 * Looking for test storage... 
00:10:51.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:51.413 12:30:33 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:51.413 12:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:51.413 12:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.413 12:30:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.413 ************************************ 00:10:51.413 START TEST env_memory 00:10:51.413 ************************************ 00:10:51.413 12:30:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:51.413 00:10:51.413 00:10:51.413 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.413 http://cunit.sourceforge.net/ 00:10:51.413 00:10:51.413 00:10:51.413 Suite: memory 00:10:51.413 Test: alloc and free memory map ...[2024-10-01 12:30:33.850594] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:51.413 passed 00:10:51.413 Test: mem map translation ...[2024-10-01 12:30:33.885335] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:51.413 [2024-10-01 12:30:33.885560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:51.413 [2024-10-01 12:30:33.885713] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:51.413 [2024-10-01 12:30:33.885878] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:51.413 passed 00:10:51.413 Test: mem map registration ...[2024-10-01 12:30:33.941292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:10:51.413 [2024-10-01 12:30:33.941487] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:10:51.673 passed 00:10:51.673 Test: mem map adjacent registrations ...passed 00:10:51.673 00:10:51.673 Run Summary: Type Total Ran Passed Failed Inactive 00:10:51.673 suites 1 1 n/a 0 0 00:10:51.673 tests 4 4 4 0 0 00:10:51.673 asserts 152 152 152 0 n/a 00:10:51.673 00:10:51.673 Elapsed time = 0.200 seconds 00:10:51.673 00:10:51.673 real 0m0.249s 00:10:51.673 user 0m0.216s 00:10:51.673 sys 0m0.032s 00:10:51.673 12:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.673 12:30:34 -- common/autotest_common.sh@10 -- # set +x 00:10:51.673 ************************************ 00:10:51.673 END TEST env_memory 00:10:51.673 ************************************ 00:10:51.673 12:30:34 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:51.673 12:30:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:51.673 12:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.673 12:30:34 -- common/autotest_common.sh@10 -- # set +x 00:10:51.673 ************************************ 00:10:51.673 START TEST env_vtophys 00:10:51.673 ************************************ 00:10:51.673 12:30:34 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:51.673 EAL: lib.eal log level changed from notice to debug 00:10:51.673 EAL: Detected lcore 0 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 1 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 2 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 3 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 4 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 5 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 6 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 7 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 8 as core 0 on socket 0 00:10:51.673 EAL: Detected lcore 9 as core 0 on socket 0 00:10:51.673 EAL: Maximum logical cores by configuration: 128 00:10:51.673 EAL: Detected CPU lcores: 10 00:10:51.673 EAL: Detected NUMA nodes: 1 00:10:51.673 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:10:51.673 EAL: Checking presence of .so 'librte_eal.so.24' 00:10:51.673 EAL: Checking presence of .so 'librte_eal.so' 00:10:51.673 EAL: Detected static linkage of DPDK 00:10:51.933 EAL: No shared files mode enabled, IPC will be disabled 00:10:51.933 EAL: Selected IOVA mode 'PA' 00:10:51.933 EAL: Probing VFIO support... 00:10:51.933 EAL: IOMMU type 1 (Type 1) is supported 00:10:51.933 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:51.933 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:51.933 EAL: VFIO support initialized 00:10:51.933 EAL: Ask a virtual area of 0x2e000 bytes 00:10:51.933 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:51.933 EAL: Setting up physically contiguous memory... 00:10:51.933 EAL: Setting maximum number of open files to 1048576 00:10:51.933 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:51.933 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:51.933 EAL: Ask a virtual area of 0x61000 bytes 00:10:51.933 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:51.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:51.933 EAL: Ask a virtual area of 0x400000000 bytes 00:10:51.933 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:51.933 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:51.933 EAL: Ask a virtual area of 0x61000 bytes 00:10:51.933 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:51.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:51.933 EAL: Ask a virtual area of 0x400000000 bytes 00:10:51.933 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:51.933 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:51.933 EAL: Ask a virtual area of 0x61000 bytes 00:10:51.933 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:51.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:51.933 EAL: Ask a virtual area of 0x400000000 bytes 00:10:51.933 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:51.933 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:51.933 EAL: Ask a virtual area of 0x61000 bytes 00:10:51.933 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:51.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:51.933 EAL: Ask a virtual area of 0x400000000 bytes 00:10:51.933 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:51.933 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:51.933 EAL: Hugepages will be freed exactly as allocated. 
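The memseg-list bookkeeping above is what gives DPDK-backed allocations stable virtual-to-physical mappings; the vtophys test then checks that SPDK's translation agrees with reality. As background, here is a self-contained sketch of the same translation done directly against Linux's /proc/self/pagemap. This is illustrative only: SPDK resolves addresses through its env layer rather than pagemap, and reading PFN bits requires root/CAP_SYS_ADMIN on modern kernels.

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Translate a virtual address to a physical address via /proc/self/pagemap.
 * Returns UINT64_MAX if the page is not present or the read fails. */
static uint64_t
vtophys_pagemap(const void *vaddr)
{
	long page_size = sysconf(_SC_PAGESIZE);
	uint64_t vfn = (uint64_t)(uintptr_t)vaddr / page_size;
	uint64_t entry = 0;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0) {
		return UINT64_MAX;
	}
	/* pagemap holds one 64-bit entry per virtual page frame number */
	if (pread(fd, &entry, sizeof(entry), vfn * sizeof(entry)) != sizeof(entry)) {
		close(fd);
		return UINT64_MAX;
	}
	close(fd);
	if (!(entry & (1ULL << 63))) {	/* bit 63: page present */
		return UINT64_MAX;
	}
	/* bits 0-54 hold the page frame number */
	uint64_t pfn = entry & ((1ULL << 55) - 1);
	return pfn * page_size + (uint64_t)(uintptr_t)vaddr % page_size;
}
```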
00:10:51.933 EAL: No shared files mode enabled, IPC is disabled 00:10:51.933 EAL: No shared files mode enabled, IPC is disabled 00:10:51.933 EAL: TSC frequency is ~2490000 KHz 00:10:51.933 EAL: Main lcore 0 is ready (tid=7f80e6bd5a80;cpuset=[0]) 00:10:51.933 EAL: Trying to obtain current memory policy. 00:10:51.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:51.933 EAL: Restoring previous memory policy: 0 00:10:51.933 EAL: request: mp_malloc_sync 00:10:51.933 EAL: No shared files mode enabled, IPC is disabled 00:10:51.933 EAL: Heap on socket 0 was expanded by 2MB 00:10:51.933 EAL: No shared files mode enabled, IPC is disabled 00:10:51.933 EAL: Mem event callback 'spdk:(nil)' registered 00:10:51.933 00:10:51.933 00:10:51.933 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.933 http://cunit.sourceforge.net/ 00:10:51.933 00:10:51.933 00:10:51.933 Suite: components_suite 00:10:52.503 Test: vtophys_malloc_test ...passed 00:10:52.503 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:52.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.503 EAL: Restoring previous memory policy: 0 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was expanded by 4MB 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was shrunk by 4MB 00:10:52.503 EAL: Trying to obtain current memory policy. 00:10:52.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.503 EAL: Restoring previous memory policy: 0 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was expanded by 6MB 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was shrunk by 6MB 00:10:52.503 EAL: Trying to obtain current memory policy. 00:10:52.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.503 EAL: Restoring previous memory policy: 0 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was expanded by 10MB 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was shrunk by 10MB 00:10:52.503 EAL: Trying to obtain current memory policy. 00:10:52.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.503 EAL: Restoring previous memory policy: 0 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was expanded by 18MB 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was shrunk by 18MB 00:10:52.503 EAL: Trying to obtain current memory policy. 
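The expand/shrink pairs in vtophys_spdk_malloc_test step through a doubling allocation ladder: the heap deltas (4, 6, 10, 18, 34, ... up to 1026 MB) are each 2^k + 2 MB, which looks like a power-of-two buffer plus one 2 MB hugepage of overhead — an inference from the log, not a documented contract. A hypothetical reconstruction of the loop, with plain malloc standing in for the SPDK allocator:

```c
#include <stdlib.h>
#include <string.h>

#define MB (1024UL * 1024UL)

/* Hypothetical reconstruction of the allocation ladder driving the
 * "Heap on socket 0 was expanded/shrunk by N MB" messages: allocate a
 * doubling buffer, touch it so pages are actually mapped, free it. */
static void
malloc_ladder(void)
{
	for (size_t sz = 2 * MB; sz <= 1024 * MB; sz *= 2) {
		void *buf = malloc(sz);	/* heap expands; mem event fires */
		if (buf == NULL) {
			break;
		}
		memset(buf, 0xA5, sz);	/* fault the pages in */
		free(buf);		/* heap shrinks back */
	}
}
```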
00:10:52.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.503 EAL: Restoring previous memory policy: 0 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was expanded by 34MB 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was shrunk by 34MB 00:10:52.503 EAL: Trying to obtain current memory policy. 00:10:52.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.503 EAL: Restoring previous memory policy: 0 00:10:52.503 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.503 EAL: request: mp_malloc_sync 00:10:52.503 EAL: No shared files mode enabled, IPC is disabled 00:10:52.503 EAL: Heap on socket 0 was expanded by 66MB 00:10:52.763 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.763 EAL: request: mp_malloc_sync 00:10:52.763 EAL: No shared files mode enabled, IPC is disabled 00:10:52.763 EAL: Heap on socket 0 was shrunk by 66MB 00:10:52.763 EAL: Trying to obtain current memory policy. 00:10:52.763 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.763 EAL: Restoring previous memory policy: 0 00:10:52.763 EAL: Calling mem event callback 'spdk:(nil)' 00:10:52.763 EAL: request: mp_malloc_sync 00:10:52.763 EAL: No shared files mode enabled, IPC is disabled 00:10:52.763 EAL: Heap on socket 0 was expanded by 130MB 00:10:53.022 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.022 EAL: request: mp_malloc_sync 00:10:53.022 EAL: No shared files mode enabled, IPC is disabled 00:10:53.022 EAL: Heap on socket 0 was shrunk by 130MB 00:10:53.281 EAL: Trying to obtain current memory policy. 00:10:53.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.281 EAL: Restoring previous memory policy: 0 00:10:53.281 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.281 EAL: request: mp_malloc_sync 00:10:53.281 EAL: No shared files mode enabled, IPC is disabled 00:10:53.281 EAL: Heap on socket 0 was expanded by 258MB 00:10:53.851 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.851 EAL: request: mp_malloc_sync 00:10:53.851 EAL: No shared files mode enabled, IPC is disabled 00:10:53.851 EAL: Heap on socket 0 was shrunk by 258MB 00:10:54.110 EAL: Trying to obtain current memory policy. 00:10:54.110 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:54.370 EAL: Restoring previous memory policy: 0 00:10:54.370 EAL: Calling mem event callback 'spdk:(nil)' 00:10:54.370 EAL: request: mp_malloc_sync 00:10:54.370 EAL: No shared files mode enabled, IPC is disabled 00:10:54.370 EAL: Heap on socket 0 was expanded by 514MB 00:10:55.308 EAL: Calling mem event callback 'spdk:(nil)' 00:10:55.308 EAL: request: mp_malloc_sync 00:10:55.308 EAL: No shared files mode enabled, IPC is disabled 00:10:55.308 EAL: Heap on socket 0 was shrunk by 514MB 00:10:55.877 EAL: Trying to obtain current memory policy. 
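Each expand/shrink above is bracketed by "Calling mem event callback 'spdk:(nil)'": the EAL notifies registered callbacks whenever hugepage memory is mapped in or released, and SPDK uses that hook to keep its own address maps current. A generic sketch of the notify pattern, assuming DPDK's rte_mem_event_callback interface (details may vary by release):

```c
#include <stddef.h>
#include <rte_memory.h>

/* Sketch of the hook behind the "Calling mem event callback" lines:
 * invoked by DPDK on hugepage map-in (ALLOC) and map-out (FREE). */
static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
	(void)arg;
	if (event == RTE_MEM_EVENT_ALLOC) {
		/* e.g. register [addr, addr + len) in vtophys / IOMMU maps */
	} else {
		/* RTE_MEM_EVENT_FREE: drop the range again */
	}
	(void)addr;
	(void)len;
}

/* Registration, e.g. during env init (the name is what shows in the log):
 *     rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
 */
```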
00:10:55.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:56.137 EAL: Restoring previous memory policy: 0 00:10:56.137 EAL: Calling mem event callback 'spdk:(nil)' 00:10:56.137 EAL: request: mp_malloc_sync 00:10:56.137 EAL: No shared files mode enabled, IPC is disabled 00:10:56.137 EAL: Heap on socket 0 was expanded by 1026MB 00:10:58.046 EAL: Calling mem event callback 'spdk:(nil)' 00:10:58.046 EAL: request: mp_malloc_sync 00:10:58.046 EAL: No shared files mode enabled, IPC is disabled 00:10:58.046 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:59.422 passed 00:10:59.422 00:10:59.423 Run Summary: Type Total Ran Passed Failed Inactive 00:10:59.423 suites 1 1 n/a 0 0 00:10:59.423 tests 2 2 2 0 0 00:10:59.423 asserts 6370 6370 6370 0 n/a 00:10:59.423 00:10:59.423 Elapsed time = 7.544 seconds 00:10:59.423 EAL: Calling mem event callback 'spdk:(nil)' 00:10:59.423 EAL: request: mp_malloc_sync 00:10:59.423 EAL: No shared files mode enabled, IPC is disabled 00:10:59.423 EAL: Heap on socket 0 was shrunk by 2MB 00:10:59.423 EAL: No shared files mode enabled, IPC is disabled 00:10:59.423 EAL: No shared files mode enabled, IPC is disabled 00:10:59.423 EAL: No shared files mode enabled, IPC is disabled 00:10:59.682 00:10:59.683 real 0m7.880s 00:10:59.683 user 0m6.930s 00:10:59.683 sys 0m0.801s 00:10:59.683 12:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.683 12:30:41 -- common/autotest_common.sh@10 -- # set +x 00:10:59.683 ************************************ 00:10:59.683 END TEST env_vtophys 00:10:59.683 ************************************ 00:10:59.683 12:30:42 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:59.683 12:30:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:59.683 12:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.683 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:10:59.683 ************************************ 00:10:59.683 START TEST env_pci 00:10:59.683 ************************************ 00:10:59.683 12:30:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:59.683 00:10:59.683 00:10:59.683 CUnit - A unit testing framework for C - Version 2.1-3 00:10:59.683 http://cunit.sourceforge.net/ 00:10:59.683 00:10:59.683 00:10:59.683 Suite: pci 00:10:59.683 Test: pci_hook ...[2024-10-01 12:30:42.118774] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103443 has claimed it 00:10:59.683 EAL: Cannot find device (10000:00:01.0) 00:10:59.683 EAL: Failed to attach device on primary process 00:10:59.683 passed 00:10:59.683 00:10:59.683 Run Summary: Type Total Ran Passed Failed Inactive 00:10:59.683 suites 1 1 n/a 0 0 00:10:59.683 tests 1 1 1 0 0 00:10:59.683 asserts 25 25 25 0 n/a 00:10:59.683 00:10:59.683 Elapsed time = 0.007 seconds 00:10:59.683 00:10:59.683 real 0m0.122s 00:10:59.683 user 0m0.035s 00:10:59.683 sys 0m0.088s 00:10:59.683 12:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.683 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:10:59.683 ************************************ 00:10:59.683 END TEST env_pci 00:10:59.683 ************************************ 00:10:59.942 12:30:42 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:59.942 12:30:42 -- env/env.sh@15 -- # uname 00:10:59.942 12:30:42 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:59.942 12:30:42 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:10:59.942 12:30:42 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:59.942 12:30:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:59.942 12:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.942 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:10:59.942 ************************************ 00:10:59.942 START TEST env_dpdk_post_init 00:10:59.942 ************************************ 00:10:59.942 12:30:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:59.943 EAL: Detected CPU lcores: 10 00:10:59.943 EAL: Detected NUMA nodes: 1 00:10:59.943 EAL: Detected static linkage of DPDK 00:10:59.943 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:59.943 EAL: Selected IOVA mode 'PA' 00:10:59.943 EAL: VFIO support initialized 00:11:00.201 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:00.201 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:11:00.201 Starting DPDK initialization... 00:11:00.201 Starting SPDK post initialization... 00:11:00.201 SPDK NVMe probe 00:11:00.201 Attaching to 0000:00:06.0 00:11:00.201 Attached to 0000:00:06.0 00:11:00.201 Cleaning up... 00:11:00.201 00:11:00.201 real 0m0.296s 00:11:00.201 user 0m0.086s 00:11:00.201 sys 0m0.112s 00:11:00.201 12:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.201 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.201 ************************************ 00:11:00.201 END TEST env_dpdk_post_init 00:11:00.201 ************************************ 00:11:00.201 12:30:42 -- env/env.sh@26 -- # uname 00:11:00.201 12:30:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:00.201 12:30:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:00.201 12:30:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:00.201 12:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:00.201 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.201 ************************************ 00:11:00.201 START TEST env_mem_callbacks 00:11:00.201 ************************************ 00:11:00.201 12:30:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:00.201 EAL: Detected CPU lcores: 10 00:11:00.201 EAL: Detected NUMA nodes: 1 00:11:00.201 EAL: Detected static linkage of DPDK 00:11:00.201 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:00.460 EAL: Selected IOVA mode 'PA' 00:11:00.460 EAL: VFIO support initialized 00:11:00.460 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:00.460 00:11:00.460 00:11:00.460 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.460 http://cunit.sourceforge.net/ 00:11:00.460 00:11:00.460 00:11:00.460 Suite: memory 00:11:00.460 Test: test ... 
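The trace that follows interleaves the test's allocations ('malloc <size>') with the notifications its callback receives as ranges enter and leave the SPDK memory map ('register'/'unregister <vaddr> <len>'). Applications can put externally allocated memory under the same tracking; a sketch with the public env API (addresses and error handling illustrative):

    #include "spdk/env.h"

    /* Make an externally allocated region visible to SPDK's memory map so
     * that vtophys and DMA can be used on it; registration events like the
     * ones traced below are delivered to mem-map callbacks. */
    static int
    track_region(void *vaddr, size_t len)
    {
        if (spdk_mem_register(vaddr, len) != 0)
            return -1;
        /* ... perform I/O against the region ... */
        return spdk_mem_unregister(vaddr, len);
    }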
00:11:00.460 register 0x200000200000 2097152 00:11:00.460 malloc 3145728 00:11:00.460 register 0x200000400000 4194304 00:11:00.460 buf 0x2000004fffc0 len 3145728 PASSED 00:11:00.460 malloc 64 00:11:00.460 buf 0x2000004ffec0 len 64 PASSED 00:11:00.460 malloc 4194304 00:11:00.460 register 0x200000800000 6291456 00:11:00.460 buf 0x2000009fffc0 len 4194304 PASSED 00:11:00.460 free 0x2000004fffc0 3145728 00:11:00.460 free 0x2000004ffec0 64 00:11:00.460 unregister 0x200000400000 4194304 PASSED 00:11:00.460 free 0x2000009fffc0 4194304 00:11:00.460 unregister 0x200000800000 6291456 PASSED 00:11:00.460 malloc 8388608 00:11:00.460 register 0x200000400000 10485760 00:11:00.460 buf 0x2000005fffc0 len 8388608 PASSED 00:11:00.460 free 0x2000005fffc0 8388608 00:11:00.460 unregister 0x200000400000 10485760 PASSED 00:11:00.460 passed 00:11:00.460 00:11:00.460 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.460 suites 1 1 n/a 0 0 00:11:00.460 tests 1 1 1 0 0 00:11:00.460 asserts 15 15 15 0 n/a 00:11:00.460 00:11:00.460 Elapsed time = 0.074 seconds 00:11:00.460 00:11:00.460 real 0m0.309s 00:11:00.460 user 0m0.127s 00:11:00.460 sys 0m0.082s 00:11:00.460 12:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.460 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.460 ************************************ 00:11:00.460 END TEST env_mem_callbacks 00:11:00.460 ************************************ 00:11:00.719 00:11:00.719 real 0m9.366s 00:11:00.719 user 0m7.588s 00:11:00.719 sys 0m1.441s 00:11:00.719 ************************************ 00:11:00.719 END TEST env 00:11:00.719 ************************************ 00:11:00.719 12:30:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.719 12:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.719 12:30:43 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:00.719 12:30:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:00.719 12:30:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:00.719 12:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.719 ************************************ 00:11:00.719 START TEST rpc 00:11:00.719 ************************************ 00:11:00.719 12:30:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:00.719 * Looking for test storage... 00:11:00.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:00.719 12:30:43 -- rpc/rpc.sh@65 -- # spdk_pid=103582 00:11:00.719 12:30:43 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:00.719 12:30:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:00.719 12:30:43 -- rpc/rpc.sh@67 -- # waitforlisten 103582 00:11:00.719 12:30:43 -- common/autotest_common.sh@819 -- # '[' -z 103582 ']' 00:11:00.719 12:30:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.719 12:30:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:00.719 12:30:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
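waitforlisten does nothing more exotic than poll until the target's UNIX-domain RPC socket accepts a connection, giving up after max_retries attempts (100 above). The equivalent check in C, as a sketch (the harness itself is shell):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Poll until something is accepting connections on the RPC socket. */
    static int
    wait_for_listen(const char *path, int retries)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };

        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
        while (retries-- > 0) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);
                return 0; /* target is up */
            }
            close(fd);
            usleep(100 * 1000); /* back off briefly between attempts */
        }
        return -1;
    }

Calling wait_for_listen("/var/tmp/spdk.sock", 100) would mirror the max_retries=100 default visible in the trace above.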
00:11:00.719 12:30:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:00.719 12:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.978 [2024-10-01 12:30:43.285014] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:00.978 [2024-10-01 12:30:43.285166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103582 ] 00:11:00.978 [2024-10-01 12:30:43.454771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.237 [2024-10-01 12:30:43.645624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:01.237 [2024-10-01 12:30:43.645841] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:01.237 [2024-10-01 12:30:43.645882] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 103582' to capture a snapshot of events at runtime. 00:11:01.237 [2024-10-01 12:30:43.645901] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid103582 for offline analysis/debug. 00:11:01.237 [2024-10-01 12:30:43.645966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.616 12:30:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:02.616 12:30:44 -- common/autotest_common.sh@852 -- # return 0 00:11:02.616 12:30:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:02.616 12:30:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:02.616 12:30:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:02.616 12:30:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:02.616 12:30:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:02.616 12:30:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.616 12:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.616 ************************************ 00:11:02.616 START TEST rpc_integrity 00:11:02.616 ************************************ 00:11:02.616 12:30:44 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:11:02.616 12:30:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.616 12:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.616 12:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.616 12:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.616 12:30:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:02.616 12:30:44 -- rpc/rpc.sh@13 -- # jq length 00:11:02.616 12:30:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:02.616 12:30:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:02.616 12:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.616 12:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.616 12:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.616 12:30:44 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:02.616 12:30:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:02.616 12:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.616 12:30:44 -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.616 12:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.616 12:30:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:02.616 { 00:11:02.616 "name": "Malloc0", 00:11:02.616 "aliases": [ 00:11:02.616 "4f6f16a2-bc3a-49c2-b428-0dfafa9a2671" 00:11:02.616 ], 00:11:02.616 "product_name": "Malloc disk", 00:11:02.616 "block_size": 512, 00:11:02.616 "num_blocks": 16384, 00:11:02.616 "uuid": "4f6f16a2-bc3a-49c2-b428-0dfafa9a2671", 00:11:02.616 "assigned_rate_limits": { 00:11:02.616 "rw_ios_per_sec": 0, 00:11:02.616 "rw_mbytes_per_sec": 0, 00:11:02.616 "r_mbytes_per_sec": 0, 00:11:02.616 "w_mbytes_per_sec": 0 00:11:02.616 }, 00:11:02.616 "claimed": false, 00:11:02.616 "zoned": false, 00:11:02.616 "supported_io_types": { 00:11:02.616 "read": true, 00:11:02.616 "write": true, 00:11:02.616 "unmap": true, 00:11:02.616 "write_zeroes": true, 00:11:02.616 "flush": true, 00:11:02.616 "reset": true, 00:11:02.616 "compare": false, 00:11:02.616 "compare_and_write": false, 00:11:02.616 "abort": true, 00:11:02.616 "nvme_admin": false, 00:11:02.616 "nvme_io": false 00:11:02.616 }, 00:11:02.616 "memory_domains": [ 00:11:02.616 { 00:11:02.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.616 "dma_device_type": 2 00:11:02.616 } 00:11:02.616 ], 00:11:02.616 "driver_specific": {} 00:11:02.616 } 00:11:02.616 ]' 00:11:02.616 12:30:44 -- rpc/rpc.sh@17 -- # jq length 00:11:02.616 12:30:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:02.616 12:30:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:02.616 12:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.616 12:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.616 [2024-10-01 12:30:44.949693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:02.616 [2024-10-01 12:30:44.949778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.616 [2024-10-01 12:30:44.949822] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:11:02.616 [2024-10-01 12:30:44.949843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.616 [2024-10-01 12:30:44.952013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.616 [2024-10-01 12:30:44.952090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:02.616 Passthru0 00:11:02.616 12:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.616 12:30:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:02.616 12:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.616 12:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.616 12:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.616 12:30:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:02.616 { 00:11:02.616 "name": "Malloc0", 00:11:02.616 "aliases": [ 00:11:02.616 "4f6f16a2-bc3a-49c2-b428-0dfafa9a2671" 00:11:02.616 ], 00:11:02.616 "product_name": "Malloc disk", 00:11:02.616 "block_size": 512, 00:11:02.616 "num_blocks": 16384, 00:11:02.616 "uuid": "4f6f16a2-bc3a-49c2-b428-0dfafa9a2671", 00:11:02.616 "assigned_rate_limits": { 00:11:02.616 "rw_ios_per_sec": 0, 00:11:02.616 "rw_mbytes_per_sec": 0, 00:11:02.616 "r_mbytes_per_sec": 0, 00:11:02.616 "w_mbytes_per_sec": 0 00:11:02.616 }, 00:11:02.616 "claimed": true, 00:11:02.616 "claim_type": "exclusive_write", 00:11:02.616 "zoned": false, 00:11:02.616 "supported_io_types": { 00:11:02.616 "read": true, 
00:11:02.616 "write": true, 00:11:02.616 "unmap": true, 00:11:02.616 "write_zeroes": true, 00:11:02.616 "flush": true, 00:11:02.616 "reset": true, 00:11:02.616 "compare": false, 00:11:02.616 "compare_and_write": false, 00:11:02.616 "abort": true, 00:11:02.616 "nvme_admin": false, 00:11:02.616 "nvme_io": false 00:11:02.616 }, 00:11:02.616 "memory_domains": [ 00:11:02.616 { 00:11:02.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.616 "dma_device_type": 2 00:11:02.616 } 00:11:02.616 ], 00:11:02.616 "driver_specific": {} 00:11:02.616 }, 00:11:02.616 { 00:11:02.616 "name": "Passthru0", 00:11:02.616 "aliases": [ 00:11:02.616 "2191a88a-debc-5206-a62d-0bfcbdb10086" 00:11:02.616 ], 00:11:02.616 "product_name": "passthru", 00:11:02.616 "block_size": 512, 00:11:02.616 "num_blocks": 16384, 00:11:02.616 "uuid": "2191a88a-debc-5206-a62d-0bfcbdb10086", 00:11:02.616 "assigned_rate_limits": { 00:11:02.616 "rw_ios_per_sec": 0, 00:11:02.616 "rw_mbytes_per_sec": 0, 00:11:02.616 "r_mbytes_per_sec": 0, 00:11:02.616 "w_mbytes_per_sec": 0 00:11:02.616 }, 00:11:02.616 "claimed": false, 00:11:02.616 "zoned": false, 00:11:02.616 "supported_io_types": { 00:11:02.616 "read": true, 00:11:02.617 "write": true, 00:11:02.617 "unmap": true, 00:11:02.617 "write_zeroes": true, 00:11:02.617 "flush": true, 00:11:02.617 "reset": true, 00:11:02.617 "compare": false, 00:11:02.617 "compare_and_write": false, 00:11:02.617 "abort": true, 00:11:02.617 "nvme_admin": false, 00:11:02.617 "nvme_io": false 00:11:02.617 }, 00:11:02.617 "memory_domains": [ 00:11:02.617 { 00:11:02.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.617 "dma_device_type": 2 00:11:02.617 } 00:11:02.617 ], 00:11:02.617 "driver_specific": { 00:11:02.617 "passthru": { 00:11:02.617 "name": "Passthru0", 00:11:02.617 "base_bdev_name": "Malloc0" 00:11:02.617 } 00:11:02.617 } 00:11:02.617 } 00:11:02.617 ]' 00:11:02.617 12:30:44 -- rpc/rpc.sh@21 -- # jq length 00:11:02.617 12:30:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:02.617 12:30:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:02.617 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.617 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.617 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.617 12:30:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:02.617 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.617 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.617 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.617 12:30:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:02.617 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.617 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.617 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.617 12:30:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:02.617 12:30:45 -- rpc/rpc.sh@26 -- # jq length 00:11:02.617 12:30:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:02.617 00:11:02.617 real 0m0.321s 00:11:02.617 user 0m0.190s 00:11:02.617 sys 0m0.041s 00:11:02.617 12:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.617 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.617 ************************************ 00:11:02.617 END TEST rpc_integrity 00:11:02.617 ************************************ 00:11:02.876 12:30:45 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:02.876 12:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:11:02.876 12:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.876 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.876 ************************************ 00:11:02.876 START TEST rpc_plugins 00:11:02.876 ************************************ 00:11:02.876 12:30:45 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:11:02.876 12:30:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:02.876 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.876 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.876 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.876 12:30:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:02.876 12:30:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:02.876 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.876 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.876 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.876 12:30:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:02.876 { 00:11:02.876 "name": "Malloc1", 00:11:02.876 "aliases": [ 00:11:02.876 "fa5961c4-aa40-49fd-840f-caad08e8336e" 00:11:02.876 ], 00:11:02.876 "product_name": "Malloc disk", 00:11:02.876 "block_size": 4096, 00:11:02.876 "num_blocks": 256, 00:11:02.876 "uuid": "fa5961c4-aa40-49fd-840f-caad08e8336e", 00:11:02.876 "assigned_rate_limits": { 00:11:02.876 "rw_ios_per_sec": 0, 00:11:02.876 "rw_mbytes_per_sec": 0, 00:11:02.876 "r_mbytes_per_sec": 0, 00:11:02.876 "w_mbytes_per_sec": 0 00:11:02.876 }, 00:11:02.876 "claimed": false, 00:11:02.876 "zoned": false, 00:11:02.876 "supported_io_types": { 00:11:02.876 "read": true, 00:11:02.876 "write": true, 00:11:02.876 "unmap": true, 00:11:02.876 "write_zeroes": true, 00:11:02.876 "flush": true, 00:11:02.876 "reset": true, 00:11:02.876 "compare": false, 00:11:02.876 "compare_and_write": false, 00:11:02.876 "abort": true, 00:11:02.876 "nvme_admin": false, 00:11:02.876 "nvme_io": false 00:11:02.876 }, 00:11:02.876 "memory_domains": [ 00:11:02.876 { 00:11:02.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.876 "dma_device_type": 2 00:11:02.876 } 00:11:02.876 ], 00:11:02.876 "driver_specific": {} 00:11:02.876 } 00:11:02.876 ]' 00:11:02.876 12:30:45 -- rpc/rpc.sh@32 -- # jq length 00:11:02.876 12:30:45 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:02.876 12:30:45 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:02.876 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.876 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.876 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.876 12:30:45 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:02.876 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.876 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.876 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.876 12:30:45 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:02.876 12:30:45 -- rpc/rpc.sh@36 -- # jq length 00:11:02.876 12:30:45 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:02.876 00:11:02.876 real 0m0.148s 00:11:02.876 user 0m0.101s 00:11:02.876 sys 0m0.015s 00:11:02.876 12:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.876 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.876 ************************************ 00:11:02.876 END TEST rpc_plugins 00:11:02.876 ************************************ 00:11:02.876 12:30:45 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:11:02.876 12:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:02.876 12:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.876 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.136 ************************************ 00:11:03.136 START TEST rpc_trace_cmd_test 00:11:03.136 ************************************ 00:11:03.136 12:30:45 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:11:03.136 12:30:45 -- rpc/rpc.sh@40 -- # local info 00:11:03.136 12:30:45 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:03.136 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.136 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.136 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.136 12:30:45 -- rpc/rpc.sh@42 -- # info='{ 00:11:03.136 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid103582", 00:11:03.136 "tpoint_group_mask": "0x8", 00:11:03.136 "iscsi_conn": { 00:11:03.136 "mask": "0x2", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "scsi": { 00:11:03.136 "mask": "0x4", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "bdev": { 00:11:03.136 "mask": "0x8", 00:11:03.136 "tpoint_mask": "0xffffffffffffffff" 00:11:03.136 }, 00:11:03.136 "nvmf_rdma": { 00:11:03.136 "mask": "0x10", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "nvmf_tcp": { 00:11:03.136 "mask": "0x20", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "ftl": { 00:11:03.136 "mask": "0x40", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "blobfs": { 00:11:03.136 "mask": "0x80", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "dsa": { 00:11:03.136 "mask": "0x200", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "thread": { 00:11:03.136 "mask": "0x400", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "nvme_pcie": { 00:11:03.136 "mask": "0x800", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "iaa": { 00:11:03.136 "mask": "0x1000", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "nvme_tcp": { 00:11:03.136 "mask": "0x2000", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 }, 00:11:03.136 "bdev_nvme": { 00:11:03.136 "mask": "0x4000", 00:11:03.136 "tpoint_mask": "0x0" 00:11:03.136 } 00:11:03.136 }' 00:11:03.136 12:30:45 -- rpc/rpc.sh@43 -- # jq length 00:11:03.136 12:30:45 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:11:03.136 12:30:45 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:03.136 12:30:45 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:03.136 12:30:45 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:03.136 12:30:45 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:03.136 12:30:45 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:03.136 12:30:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:03.136 12:30:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:03.396 12:30:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:03.396 00:11:03.396 real 0m0.253s 00:11:03.396 user 0m0.202s 00:11:03.396 sys 0m0.043s 00:11:03.396 12:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.396 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.396 ************************************ 00:11:03.396 END TEST rpc_trace_cmd_test 00:11:03.396 ************************************ 00:11:03.396 12:30:45 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:03.396 12:30:45 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:03.396 12:30:45 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:11:03.396 12:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:03.396 12:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.396 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.396 ************************************ 00:11:03.396 START TEST rpc_daemon_integrity 00:11:03.396 ************************************ 00:11:03.396 12:30:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:11:03.396 12:30:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.396 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.396 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.396 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.396 12:30:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:03.396 12:30:45 -- rpc/rpc.sh@13 -- # jq length 00:11:03.396 12:30:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:03.396 12:30:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:03.396 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.396 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.396 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.396 12:30:45 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:03.396 12:30:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:03.396 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.396 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.396 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.396 12:30:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:03.396 { 00:11:03.396 "name": "Malloc2", 00:11:03.396 "aliases": [ 00:11:03.396 "c9388c1b-71ca-4a77-9ff4-e088ef3ab3f3" 00:11:03.396 ], 00:11:03.396 "product_name": "Malloc disk", 00:11:03.396 "block_size": 512, 00:11:03.396 "num_blocks": 16384, 00:11:03.396 "uuid": "c9388c1b-71ca-4a77-9ff4-e088ef3ab3f3", 00:11:03.396 "assigned_rate_limits": { 00:11:03.396 "rw_ios_per_sec": 0, 00:11:03.396 "rw_mbytes_per_sec": 0, 00:11:03.396 "r_mbytes_per_sec": 0, 00:11:03.396 "w_mbytes_per_sec": 0 00:11:03.396 }, 00:11:03.396 "claimed": false, 00:11:03.396 "zoned": false, 00:11:03.396 "supported_io_types": { 00:11:03.396 "read": true, 00:11:03.396 "write": true, 00:11:03.396 "unmap": true, 00:11:03.396 "write_zeroes": true, 00:11:03.396 "flush": true, 00:11:03.396 "reset": true, 00:11:03.396 "compare": false, 00:11:03.396 "compare_and_write": false, 00:11:03.396 "abort": true, 00:11:03.396 "nvme_admin": false, 00:11:03.396 "nvme_io": false 00:11:03.396 }, 00:11:03.396 "memory_domains": [ 00:11:03.396 { 00:11:03.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.396 "dma_device_type": 2 00:11:03.396 } 00:11:03.396 ], 00:11:03.396 "driver_specific": {} 00:11:03.396 } 00:11:03.396 ]' 00:11:03.396 12:30:45 -- rpc/rpc.sh@17 -- # jq length 00:11:03.396 12:30:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:03.396 12:30:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:03.396 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.396 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.396 [2024-10-01 12:30:45.903729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:03.396 [2024-10-01 12:30:45.903808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.396 [2024-10-01 12:30:45.903842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:03.396 
[2024-10-01 12:30:45.903861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.396 [2024-10-01 12:30:45.906047] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.396 [2024-10-01 12:30:45.906115] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:03.396 Passthru0 00:11:03.396 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.396 12:30:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:03.396 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.396 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.656 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.656 12:30:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:03.656 { 00:11:03.656 "name": "Malloc2", 00:11:03.656 "aliases": [ 00:11:03.656 "c9388c1b-71ca-4a77-9ff4-e088ef3ab3f3" 00:11:03.656 ], 00:11:03.656 "product_name": "Malloc disk", 00:11:03.656 "block_size": 512, 00:11:03.656 "num_blocks": 16384, 00:11:03.656 "uuid": "c9388c1b-71ca-4a77-9ff4-e088ef3ab3f3", 00:11:03.656 "assigned_rate_limits": { 00:11:03.656 "rw_ios_per_sec": 0, 00:11:03.656 "rw_mbytes_per_sec": 0, 00:11:03.656 "r_mbytes_per_sec": 0, 00:11:03.656 "w_mbytes_per_sec": 0 00:11:03.656 }, 00:11:03.656 "claimed": true, 00:11:03.656 "claim_type": "exclusive_write", 00:11:03.656 "zoned": false, 00:11:03.656 "supported_io_types": { 00:11:03.656 "read": true, 00:11:03.656 "write": true, 00:11:03.656 "unmap": true, 00:11:03.656 "write_zeroes": true, 00:11:03.656 "flush": true, 00:11:03.656 "reset": true, 00:11:03.656 "compare": false, 00:11:03.656 "compare_and_write": false, 00:11:03.656 "abort": true, 00:11:03.656 "nvme_admin": false, 00:11:03.656 "nvme_io": false 00:11:03.656 }, 00:11:03.656 "memory_domains": [ 00:11:03.656 { 00:11:03.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.656 "dma_device_type": 2 00:11:03.656 } 00:11:03.656 ], 00:11:03.656 "driver_specific": {} 00:11:03.656 }, 00:11:03.656 { 00:11:03.656 "name": "Passthru0", 00:11:03.656 "aliases": [ 00:11:03.656 "da8911ed-c384-5f05-893e-e5b129e4b463" 00:11:03.656 ], 00:11:03.656 "product_name": "passthru", 00:11:03.656 "block_size": 512, 00:11:03.656 "num_blocks": 16384, 00:11:03.656 "uuid": "da8911ed-c384-5f05-893e-e5b129e4b463", 00:11:03.656 "assigned_rate_limits": { 00:11:03.656 "rw_ios_per_sec": 0, 00:11:03.656 "rw_mbytes_per_sec": 0, 00:11:03.656 "r_mbytes_per_sec": 0, 00:11:03.656 "w_mbytes_per_sec": 0 00:11:03.656 }, 00:11:03.656 "claimed": false, 00:11:03.656 "zoned": false, 00:11:03.656 "supported_io_types": { 00:11:03.656 "read": true, 00:11:03.656 "write": true, 00:11:03.656 "unmap": true, 00:11:03.656 "write_zeroes": true, 00:11:03.656 "flush": true, 00:11:03.656 "reset": true, 00:11:03.656 "compare": false, 00:11:03.656 "compare_and_write": false, 00:11:03.656 "abort": true, 00:11:03.656 "nvme_admin": false, 00:11:03.656 "nvme_io": false 00:11:03.656 }, 00:11:03.656 "memory_domains": [ 00:11:03.656 { 00:11:03.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.656 "dma_device_type": 2 00:11:03.656 } 00:11:03.656 ], 00:11:03.656 "driver_specific": { 00:11:03.656 "passthru": { 00:11:03.656 "name": "Passthru0", 00:11:03.656 "base_bdev_name": "Malloc2" 00:11:03.656 } 00:11:03.656 } 00:11:03.656 } 00:11:03.656 ]' 00:11:03.656 12:30:45 -- rpc/rpc.sh@21 -- # jq length 00:11:03.656 12:30:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:03.656 12:30:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:03.656 12:30:45 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.656 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.656 12:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.656 12:30:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:03.656 12:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.656 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.656 12:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.656 12:30:46 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:03.656 12:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.656 12:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:03.656 12:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.656 12:30:46 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:03.656 12:30:46 -- rpc/rpc.sh@26 -- # jq length 00:11:03.656 12:30:46 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:03.656 00:11:03.656 real 0m0.325s 00:11:03.656 user 0m0.201s 00:11:03.656 sys 0m0.029s 00:11:03.656 12:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.656 12:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:03.656 ************************************ 00:11:03.656 END TEST rpc_daemon_integrity 00:11:03.656 ************************************ 00:11:03.656 12:30:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:03.656 12:30:46 -- rpc/rpc.sh@84 -- # killprocess 103582 00:11:03.656 12:30:46 -- common/autotest_common.sh@926 -- # '[' -z 103582 ']' 00:11:03.656 12:30:46 -- common/autotest_common.sh@930 -- # kill -0 103582 00:11:03.656 12:30:46 -- common/autotest_common.sh@931 -- # uname 00:11:03.656 12:30:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:03.656 12:30:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103582 00:11:03.656 12:30:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:03.656 12:30:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:03.656 12:30:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103582' 00:11:03.656 killing process with pid 103582 00:11:03.656 12:30:46 -- common/autotest_common.sh@945 -- # kill 103582 00:11:03.656 12:30:46 -- common/autotest_common.sh@950 -- # wait 103582 00:11:06.190 00:11:06.190 real 0m5.451s 00:11:06.190 user 0m6.195s 00:11:06.190 sys 0m0.882s 00:11:06.190 12:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.190 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:06.190 ************************************ 00:11:06.190 END TEST rpc 00:11:06.190 ************************************ 00:11:06.190 12:30:48 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:06.191 12:30:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:06.191 12:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.191 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:06.191 ************************************ 00:11:06.191 START TEST rpc_client 00:11:06.191 ************************************ 00:11:06.191 12:30:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:06.191 * Looking for test storage... 
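rpc_client_test exercises SPDK's C JSON-RPC client against the same kind of socket the shell tests used. On the wire every call is a single JSON-RPC 2.0 object; a dependency-free sketch of that framing with raw sockets (not the SPDK client API, and with error handling trimmed):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int
    main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        char buf[4096];

        strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            return 1;
        /* bdev_get_bdevs is the same method the rpc integrity tests used. */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
        write(fd, req, strlen(req));
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }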
00:11:06.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:06.461 12:30:48 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:06.461 OK 00:11:06.461 12:30:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:06.461 00:11:06.461 real 0m0.203s 00:11:06.461 user 0m0.117s 00:11:06.461 sys 0m0.105s 00:11:06.461 12:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.461 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:06.461 ************************************ 00:11:06.461 END TEST rpc_client 00:11:06.461 ************************************ 00:11:06.461 12:30:48 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:06.461 12:30:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:06.461 12:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.461 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:06.461 ************************************ 00:11:06.461 START TEST json_config 00:11:06.461 ************************************ 00:11:06.461 12:30:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:06.461 12:30:48 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.461 12:30:48 -- nvmf/common.sh@7 -- # uname -s 00:11:06.461 12:30:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.461 12:30:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.461 12:30:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.461 12:30:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.461 12:30:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.461 12:30:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.461 12:30:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.461 12:30:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.461 12:30:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.461 12:30:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.461 12:30:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:40f760f6-a274-4981-80ad-c7a2e742d35c 00:11:06.461 12:30:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=40f760f6-a274-4981-80ad-c7a2e742d35c 00:11:06.461 12:30:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.461 12:30:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.461 12:30:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:06.461 12:30:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.461 12:30:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.461 12:30:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.461 12:30:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.461 12:30:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:06.461 12:30:48 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:06.461 12:30:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:06.461 12:30:48 -- paths/export.sh@5 -- # export PATH 00:11:06.461 12:30:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:06.461 12:30:48 -- nvmf/common.sh@46 -- # : 0 00:11:06.461 12:30:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:06.461 12:30:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:06.461 12:30:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:06.461 12:30:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.461 12:30:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.461 12:30:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:06.461 12:30:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:06.461 12:30:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:06.461 12:30:48 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:11:06.461 12:30:48 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:11:06.461 12:30:48 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:11:06.461 12:30:48 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:06.461 12:30:48 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:11:06.461 12:30:48 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:11:06.462 12:30:48 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:06.462 12:30:48 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:11:06.462 12:30:48 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:06.462 12:30:48 -- json_config/json_config.sh@32 -- # declare -A app_params 00:11:06.462 12:30:48 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:06.462 12:30:48 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:11:06.462 12:30:48 -- json_config/json_config.sh@43 -- # last_event_id=0 00:11:06.719 12:30:48 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:06.719 INFO: JSON configuration test init 00:11:06.719 12:30:48 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:11:06.719 12:30:48 -- json_config/json_config.sh@420 -- # json_config_test_init 00:11:06.719 12:30:48 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:11:06.719 12:30:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:06.719 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:06.719 12:30:48 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:11:06.719 12:30:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:06.719 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:06.719 12:30:49 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:11:06.719 12:30:49 -- json_config/json_config.sh@98 -- # local app=target 00:11:06.719 12:30:49 -- json_config/json_config.sh@99 -- # shift 00:11:06.719 12:30:49 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:11:06.719 12:30:49 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:11:06.719 12:30:49 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:11:06.719 12:30:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:11:06.719 12:30:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:11:06.719 12:30:49 -- json_config/json_config.sh@111 -- # app_pid[$app]=103875 00:11:06.719 12:30:49 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:11:06.719 Waiting for target to run... 00:11:06.719 12:30:49 -- json_config/json_config.sh@114 -- # waitforlisten 103875 /var/tmp/spdk_tgt.sock 00:11:06.719 12:30:49 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:06.719 12:30:49 -- common/autotest_common.sh@819 -- # '[' -z 103875 ']' 00:11:06.719 12:30:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:06.719 12:30:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:06.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:06.719 12:30:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:06.719 12:30:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:06.719 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:06.719 [2024-10-01 12:30:49.082503] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
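With --wait-for-rpc the target comes up with only its RPC server bound (here to /var/tmp/spdk_tgt.sock); subsystem initialization is deferred until the framework_start_init RPC arrives, which is what lets json_config replay a saved configuration into a blank target. A sketch of the corresponding application bring-up, assuming the v24-era spdk_app_opts_init signature that takes the struct size:

    #include "spdk/event.h"
    #include "spdk/log.h"

    static void
    app_start_cb(void *ctx)
    {
        (void)ctx;
        /* Reached once subsystems are initialized and RPC is serving. */
        SPDK_NOTICELOG("target ready\n");
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        (void)argc;
        (void)argv;
        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "demo_tgt";
        opts.rpc_addr = "/var/tmp/spdk_tgt.sock"; /* matches the -r above */
        rc = spdk_app_start(&opts, app_start_cb, NULL);
        spdk_app_fini();
        return rc;
    }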
00:11:06.719 [2024-10-01 12:30:49.082636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103875 ] 00:11:06.976 [2024-10-01 12:30:49.478010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.236 [2024-10-01 12:30:49.660877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:07.236 [2024-10-01 12:30:49.661080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.495 12:30:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:07.495 00:11:07.495 12:30:49 -- common/autotest_common.sh@852 -- # return 0 00:11:07.495 12:30:49 -- json_config/json_config.sh@115 -- # echo '' 00:11:07.495 12:30:49 -- json_config/json_config.sh@322 -- # create_accel_config 00:11:07.495 12:30:49 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:11:07.496 12:30:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:07.496 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:07.496 12:30:49 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:11:07.496 12:30:49 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:11:07.496 12:30:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:07.496 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:07.496 12:30:49 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:07.496 12:30:49 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:11:07.496 12:30:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:08.435 12:30:50 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:11:08.435 12:30:50 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:11:08.435 12:30:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:08.435 12:30:50 -- common/autotest_common.sh@10 -- # set +x 00:11:08.435 12:30:50 -- json_config/json_config.sh@48 -- # local ret=0 00:11:08.435 12:30:50 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:08.435 12:30:50 -- json_config/json_config.sh@49 -- # local enabled_types 00:11:08.435 12:30:50 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:11:08.435 12:30:50 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:11:08.435 12:30:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:08.695 12:30:51 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:11:08.695 12:30:51 -- json_config/json_config.sh@51 -- # local get_types 00:11:08.695 12:30:51 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:11:08.695 12:30:51 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:11:08.695 12:30:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:08.695 12:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.695 12:30:51 -- json_config/json_config.sh@58 -- # return 0 00:11:08.695 12:30:51 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:11:08.695 12:30:51 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:11:08.695 12:30:51 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:11:08.695 12:30:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:08.695 12:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.695 12:30:51 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:11:08.695 12:30:51 -- json_config/json_config.sh@160 -- # local expected_notifications 00:11:08.695 12:30:51 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:11:08.695 12:30:51 -- json_config/json_config.sh@164 -- # get_notifications 00:11:08.695 12:30:51 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:11:08.695 12:30:51 -- json_config/json_config.sh@64 -- # IFS=: 00:11:08.695 12:30:51 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:08.695 12:30:51 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:11:08.695 12:30:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:11:08.695 12:30:51 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:11:08.954 12:30:51 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:11:08.954 12:30:51 -- json_config/json_config.sh@64 -- # IFS=: 00:11:08.954 12:30:51 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:08.954 12:30:51 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:11:08.954 12:30:51 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:11:08.954 12:30:51 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:11:08.954 12:30:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:11:09.214 Nvme0n1p0 Nvme0n1p1 00:11:09.214 12:30:51 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:11:09.214 12:30:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:11:09.473 [2024-10-01 12:30:51.759174] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:09.473 [2024-10-01 12:30:51.759272] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:09.473 00:11:09.473 12:30:51 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:11:09.473 12:30:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:11:09.473 Malloc3 00:11:09.473 12:30:51 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:11:09.473 12:30:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:11:09.733 [2024-10-01 12:30:52.136018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:09.733 [2024-10-01 12:30:52.136124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.733 [2024-10-01 12:30:52.136161] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:09.733 [2024-10-01 12:30:52.136196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:11:09.733 [2024-10-01 12:30:52.138529] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.733 [2024-10-01 12:30:52.138585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:11:09.733 PTBdevFromMalloc3 00:11:09.733 12:30:52 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:11:09.733 12:30:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:11:09.992 Null0 00:11:09.992 12:30:52 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:11:09.992 12:30:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:11:09.992 Malloc0 00:11:10.251 12:30:52 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:11:10.251 12:30:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:11:10.251 Malloc1 00:11:10.251 12:30:52 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:11:10.251 12:30:52 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:11:10.819 102400+0 records in 00:11:10.819 102400+0 records out 00:11:10.819 104857600 bytes (105 MB, 100 MiB) copied, 0.338373 s, 310 MB/s 00:11:10.819 12:30:53 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:11:10.819 12:30:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:11:10.819 aio_disk 00:11:10.819 12:30:53 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:11:10.819 12:30:53 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:11:10.819 12:30:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:11:11.078 32537b57-acf2-438e-9903-640261936a8a 00:11:11.078 12:30:53 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:11:11.078 12:30:53 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:11:11.078 12:30:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:11:11.415 12:30:53 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:11:11.415 12:30:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:11:11.415 12:30:53 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:11:11.415 12:30:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:11:11.674 12:30:54 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:11:11.674 12:30:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:11:11.932 12:30:54 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:11:11.932 12:30:54 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:11:11.932 12:30:54 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f5ee780e-7648-4c93-a2da-7285231d957f bdev_register:ebacc55b-f1cb-41ac-a1b1-54026024397a bdev_register:aa159525-0f7a-4c84-9683-caafa7c81564 bdev_register:bf28efcb-faeb-4d20-81f7-04ac3b49b59d 00:11:11.932 12:30:54 -- json_config/json_config.sh@70 -- # local events_to_check 00:11:11.932 12:30:54 -- json_config/json_config.sh@71 -- # local recorded_events 00:11:11.932 12:30:54 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:11:11.932 12:30:54 -- json_config/json_config.sh@74 -- # sort 00:11:11.932 12:30:54 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f5ee780e-7648-4c93-a2da-7285231d957f bdev_register:ebacc55b-f1cb-41ac-a1b1-54026024397a bdev_register:aa159525-0f7a-4c84-9683-caafa7c81564 bdev_register:bf28efcb-faeb-4d20-81f7-04ac3b49b59d 00:11:11.932 12:30:54 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:11:11.932 12:30:54 -- json_config/json_config.sh@75 -- # get_notifications 00:11:11.932 12:30:54 -- json_config/json_config.sh@75 -- # sort 00:11:11.932 12:30:54 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:11:11.932 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:11.932 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:11.932 12:30:54 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:11:11.932 12:30:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:11:11.932 12:30:54 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.192 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.192 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.193 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:f5ee780e-7648-4c93-a2da-7285231d957f 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.193 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:ebacc55b-f1cb-41ac-a1b1-54026024397a 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.193 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:aa159525-0f7a-4c84-9683-caafa7c81564 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.193 12:30:54 -- json_config/json_config.sh@65 -- # echo bdev_register:bf28efcb-faeb-4d20-81f7-04ac3b49b59d 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # IFS=: 00:11:12.193 12:30:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:11:12.193 12:30:54 -- json_config/json_config.sh@77 
-- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aa159525-0f7a-4c84-9683-caafa7c81564 bdev_register:aio_disk bdev_register:bf28efcb-faeb-4d20-81f7-04ac3b49b59d bdev_register:ebacc55b-f1cb-41ac-a1b1-54026024397a bdev_register:f5ee780e-7648-4c93-a2da-7285231d957f != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\a\1\5\9\5\2\5\-\0\f\7\a\-\4\c\8\4\-\9\6\8\3\-\c\a\a\f\a\7\c\8\1\5\6\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\f\2\8\e\f\c\b\-\f\a\e\b\-\4\d\2\0\-\8\1\f\7\-\0\4\a\c\3\b\4\9\b\5\9\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\b\a\c\c\5\5\b\-\f\1\c\b\-\4\1\a\c\-\a\1\b\1\-\5\4\0\2\6\0\2\4\3\9\7\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\5\e\e\7\8\0\e\-\7\6\4\8\-\4\c\9\3\-\a\2\d\a\-\7\2\8\5\2\3\1\d\9\5\7\f ]] 00:11:12.193 12:30:54 -- json_config/json_config.sh@89 -- # cat 00:11:12.193 12:30:54 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aa159525-0f7a-4c84-9683-caafa7c81564 bdev_register:aio_disk bdev_register:bf28efcb-faeb-4d20-81f7-04ac3b49b59d bdev_register:ebacc55b-f1cb-41ac-a1b1-54026024397a bdev_register:f5ee780e-7648-4c93-a2da-7285231d957f 00:11:12.193 Expected events matched: 00:11:12.193 bdev_register:Malloc0 00:11:12.193 bdev_register:Malloc0p0 00:11:12.193 bdev_register:Malloc0p1 00:11:12.193 bdev_register:Malloc0p2 00:11:12.193 bdev_register:Malloc1 00:11:12.193 bdev_register:Malloc3 00:11:12.193 bdev_register:Null0 00:11:12.193 bdev_register:Nvme0n1 00:11:12.193 bdev_register:Nvme0n1p0 00:11:12.193 bdev_register:Nvme0n1p1 00:11:12.193 bdev_register:PTBdevFromMalloc3 00:11:12.193 bdev_register:aa159525-0f7a-4c84-9683-caafa7c81564 00:11:12.193 bdev_register:aio_disk 00:11:12.193 bdev_register:bf28efcb-faeb-4d20-81f7-04ac3b49b59d 00:11:12.193 bdev_register:ebacc55b-f1cb-41ac-a1b1-54026024397a 00:11:12.193 bdev_register:f5ee780e-7648-4c93-a2da-7285231d957f 00:11:12.193 12:30:54 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:11:12.193 12:30:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:12.193 12:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.193 12:30:54 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:11:12.193 12:30:54 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:11:12.193 12:30:54 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:11:12.193 12:30:54 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:11:12.193 12:30:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:12.193 12:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.193 
12:30:54 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:11:12.193 12:30:54 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:12.193 12:30:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:12.451 MallocBdevForConfigChangeCheck 00:11:12.451 12:30:54 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:11:12.451 12:30:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:12.451 12:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.451 12:30:54 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:11:12.451 12:30:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:12.710 INFO: shutting down applications... 00:11:12.710 12:30:55 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:11:12.710 12:30:55 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:11:12.710 12:30:55 -- json_config/json_config.sh@431 -- # json_config_clear target 00:11:12.710 12:30:55 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:11:12.710 12:30:55 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:12.969 [2024-10-01 12:30:55.290943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:11:12.969 Calling clear_vhost_scsi_subsystem 00:11:12.969 Calling clear_iscsi_subsystem 00:11:12.969 Calling clear_vhost_blk_subsystem 00:11:12.969 Calling clear_nbd_subsystem 00:11:12.969 Calling clear_nvmf_subsystem 00:11:12.969 Calling clear_bdev_subsystem 00:11:12.969 Calling clear_accel_subsystem 00:11:12.969 Calling clear_iobuf_subsystem 00:11:12.969 Calling clear_sock_subsystem 00:11:12.969 Calling clear_vmd_subsystem 00:11:12.969 Calling clear_scheduler_subsystem 00:11:12.969 12:30:55 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:12.969 12:30:55 -- json_config/json_config.sh@396 -- # count=100 00:11:12.969 12:30:55 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:11:12.969 12:30:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:12.969 12:30:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:12.969 12:30:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:13.536 12:30:55 -- json_config/json_config.sh@398 -- # break 00:11:13.536 12:30:55 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:11:13.536 12:30:55 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:11:13.536 12:30:55 -- json_config/json_config.sh@120 -- # local app=target 00:11:13.536 12:30:55 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:11:13.536 12:30:55 -- json_config/json_config.sh@124 -- # [[ -n 103875 ]] 00:11:13.536 12:30:55 -- json_config/json_config.sh@127 -- # kill -SIGINT 103875 00:11:13.536 12:30:55 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:11:13.536 12:30:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:11:13.536 12:30:55 -- 
json_config/json_config.sh@130 -- # kill -0 103875 00:11:13.536 12:30:55 -- json_config/json_config.sh@134 -- # sleep 0.5 00:11:13.799 12:30:56 -- json_config/json_config.sh@129 -- # (( i++ )) 00:11:13.799 12:30:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:11:13.799 12:30:56 -- json_config/json_config.sh@130 -- # kill -0 103875 00:11:13.799 12:30:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:11:14.367 12:30:56 -- json_config/json_config.sh@129 -- # (( i++ )) 00:11:14.367 12:30:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:11:14.367 12:30:56 -- json_config/json_config.sh@130 -- # kill -0 103875 00:11:14.367 12:30:56 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:11:14.368 12:30:56 -- json_config/json_config.sh@132 -- # break 00:11:14.368 12:30:56 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:11:14.368 SPDK target shutdown done 00:11:14.368 12:30:56 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:11:14.368 INFO: relaunching applications... 00:11:14.368 12:30:56 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:11:14.368 12:30:56 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:14.368 12:30:56 -- json_config/json_config.sh@98 -- # local app=target 00:11:14.368 12:30:56 -- json_config/json_config.sh@99 -- # shift 00:11:14.368 12:30:56 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:11:14.368 12:30:56 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:11:14.368 12:30:56 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:11:14.368 12:30:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:11:14.368 12:30:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:11:14.368 12:30:56 -- json_config/json_config.sh@111 -- # app_pid[$app]=104129 00:11:14.368 Waiting for target to run... 00:11:14.368 12:30:56 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:11:14.368 12:30:56 -- json_config/json_config.sh@114 -- # waitforlisten 104129 /var/tmp/spdk_tgt.sock 00:11:14.368 12:30:56 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:14.368 12:30:56 -- common/autotest_common.sh@819 -- # '[' -z 104129 ']' 00:11:14.368 12:30:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:14.368 12:30:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:14.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:14.368 12:30:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:14.368 12:30:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:14.368 12:30:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.368 [2024-10-01 12:30:56.883801] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
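For reference, the relaunch above reduces to the following shell sketch: restart spdk_tgt from the JSON configuration saved earlier, then poll the RPC socket until it answers. Paths, flags, and the socket name are taken from the trace; the polling loop is an illustrative stand-in for the test's waitforlisten helper, not its exact code.

  SPDK=/home/vagrant/spdk_repo/spdk
  # Relaunch the target from the previously saved configuration
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/spdk_tgt_config.json" &
  # Block until the UNIX-domain RPC socket accepts a trivial call
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done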
00:11:14.368 [2024-10-01 12:30:56.883964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104129 ] 00:11:14.934 [2024-10-01 12:30:57.298044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.192 [2024-10-01 12:30:57.481136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.192 [2024-10-01 12:30:57.481356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.759 [2024-10-01 12:30:58.270449] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:11:15.759 [2024-10-01 12:30:58.270554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:11:15.759 [2024-10-01 12:30:58.278405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:15.759 [2024-10-01 12:30:58.278448] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:15.759 [2024-10-01 12:30:58.286425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:15.759 [2024-10-01 12:30:58.286480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:15.759 [2024-10-01 12:30:58.286508] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:16.017 [2024-10-01 12:30:58.381066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:16.017 [2024-10-01 12:30:58.381187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.017 [2024-10-01 12:30:58.381221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:16.017 [2024-10-01 12:30:58.381246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.017 [2024-10-01 12:30:58.381663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.017 [2024-10-01 12:30:58.381705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:11:16.586 12:30:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:16.586 12:30:59 -- common/autotest_common.sh@852 -- # return 0 00:11:16.586 00:11:16.586 12:30:59 -- json_config/json_config.sh@115 -- # echo '' 00:11:16.586 12:30:59 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:11:16.586 12:30:59 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:16.586 INFO: Checking if target configuration is the same... 00:11:16.586 12:30:59 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:16.586 12:30:59 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:11:16.586 12:30:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:16.586 + '[' 2 -ne 2 ']' 00:11:16.586 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:16.586 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:11:16.586 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:16.586 +++ basename /dev/fd/62 00:11:16.586 ++ mktemp /tmp/62.XXX 00:11:16.586 + tmp_file_1=/tmp/62.iC1 00:11:16.586 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:16.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:16.586 + tmp_file_2=/tmp/spdk_tgt_config.json.CUi 00:11:16.586 + ret=0 00:11:16.586 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:16.844 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:17.103 + diff -u /tmp/62.iC1 /tmp/spdk_tgt_config.json.CUi 00:11:17.103 INFO: JSON config files are the same 00:11:17.103 + echo 'INFO: JSON config files are the same' 00:11:17.103 + rm /tmp/62.iC1 /tmp/spdk_tgt_config.json.CUi 00:11:17.103 + exit 0 00:11:17.103 12:30:59 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:11:17.103 INFO: changing configuration and checking if this can be detected... 00:11:17.103 12:30:59 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:17.103 12:30:59 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:17.103 12:30:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:17.103 12:30:59 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:17.103 12:30:59 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:11:17.103 12:30:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:17.103 + '[' 2 -ne 2 ']' 00:11:17.103 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:17.103 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:17.103 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:17.103 +++ basename /dev/fd/62 00:11:17.103 ++ mktemp /tmp/62.XXX 00:11:17.362 + tmp_file_1=/tmp/62.WZx 00:11:17.362 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:17.362 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:17.362 + tmp_file_2=/tmp/spdk_tgt_config.json.ggO 00:11:17.362 + ret=0 00:11:17.362 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:17.621 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:17.621 + diff -u /tmp/62.WZx /tmp/spdk_tgt_config.json.ggO 00:11:17.621 + ret=1 00:11:17.621 + echo '=== Start of file: /tmp/62.WZx ===' 00:11:17.621 + cat /tmp/62.WZx 00:11:17.621 + echo '=== End of file: /tmp/62.WZx ===' 00:11:17.621 + echo '' 00:11:17.621 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ggO ===' 00:11:17.621 + cat /tmp/spdk_tgt_config.json.ggO 00:11:17.621 + echo '=== End of file: /tmp/spdk_tgt_config.json.ggO ===' 00:11:17.621 + echo '' 00:11:17.621 + rm /tmp/62.WZx /tmp/spdk_tgt_config.json.ggO 00:11:17.621 + exit 1 00:11:17.621 INFO: configuration change detected. 00:11:17.621 12:30:59 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
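Both configuration checks above follow the same pattern: dump the live configuration over RPC with save_config, normalize the live and on-disk files with config_filter.py -method sort, and compare them with diff -u. A condensed sketch (the temp-file names are illustrative; every command appears in the trace):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  "$SPDK/test/json_config/config_filter.py" -method sort < /tmp/live.json > /tmp/live.sorted
  "$SPDK/test/json_config/config_filter.py" -method sort \
      < "$SPDK/spdk_tgt_config.json" > /tmp/disk.sorted
  if diff -u /tmp/disk.sorted /tmp/live.sorted; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi

Deleting MallocBdevForConfigChangeCheck between the two runs is what flips the diff from exit 0 to exit 1.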
00:11:17.621 12:30:59 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:11:17.621 12:30:59 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:11:17.621 12:30:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:17.621 12:30:59 -- common/autotest_common.sh@10 -- # set +x 00:11:17.621 12:31:00 -- json_config/json_config.sh@360 -- # local ret=0 00:11:17.621 12:31:00 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:11:17.621 12:31:00 -- json_config/json_config.sh@370 -- # [[ -n 104129 ]] 00:11:17.621 12:31:00 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:11:17.621 12:31:00 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:11:17.621 12:31:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:17.621 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:17.621 12:31:00 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:11:17.621 12:31:00 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:11:17.621 12:31:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:11:17.881 12:31:00 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:11:17.881 12:31:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:11:17.881 12:31:00 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:11:17.881 12:31:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:11:18.140 12:31:00 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:11:18.140 12:31:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:11:18.437 12:31:00 -- json_config/json_config.sh@246 -- # uname -s 00:11:18.437 12:31:00 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:11:18.437 12:31:00 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:11:18.437 12:31:00 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:11:18.437 12:31:00 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:11:18.437 12:31:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:18.437 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:18.437 12:31:00 -- json_config/json_config.sh@376 -- # killprocess 104129 00:11:18.437 12:31:00 -- common/autotest_common.sh@926 -- # '[' -z 104129 ']' 00:11:18.437 12:31:00 -- common/autotest_common.sh@930 -- # kill -0 104129 00:11:18.437 12:31:00 -- common/autotest_common.sh@931 -- # uname 00:11:18.437 12:31:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:18.437 12:31:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104129 00:11:18.437 12:31:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:18.437 12:31:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:18.437 killing process with pid 104129 00:11:18.437 12:31:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104129' 00:11:18.437 12:31:00 -- common/autotest_common.sh@945 -- # kill 104129 00:11:18.437 12:31:00 -- common/autotest_common.sh@950 -- # wait 104129 00:11:19.375 12:31:01 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:19.375 12:31:01 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:11:19.375 12:31:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:19.375 12:31:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.634 12:31:01 -- json_config/json_config.sh@381 -- # return 0 00:11:19.634 INFO: Success 00:11:19.634 12:31:01 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:11:19.634 00:11:19.634 real 0m13.067s 00:11:19.634 user 0m17.661s 00:11:19.634 sys 0m2.806s 00:11:19.634 12:31:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.634 12:31:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.634 ************************************ 00:11:19.634 END TEST json_config 00:11:19.634 ************************************ 00:11:19.634 12:31:02 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:19.634 12:31:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:19.634 12:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.634 12:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:19.634 ************************************ 00:11:19.634 START TEST json_config_extra_key 00:11:19.634 ************************************ 00:11:19.634 12:31:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.634 12:31:02 -- nvmf/common.sh@7 -- # uname -s 00:11:19.634 12:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.634 12:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.634 12:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.634 12:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.634 12:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.634 12:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.634 12:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.634 12:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.634 12:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.634 12:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.634 12:31:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fdc71f59-5982-4281-8120-d7b6902c85fa 00:11:19.634 12:31:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=fdc71f59-5982-4281-8120-d7b6902c85fa 00:11:19.634 12:31:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.634 12:31:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.634 12:31:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:19.634 12:31:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.634 12:31:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.634 12:31:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.634 12:31:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.634 12:31:02 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:19.634 12:31:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:19.634 12:31:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:19.634 12:31:02 -- paths/export.sh@5 -- # export PATH 00:11:19.634 12:31:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:19.634 12:31:02 -- nvmf/common.sh@46 -- # : 0 00:11:19.634 12:31:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:19.634 12:31:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:19.634 12:31:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:19.634 12:31:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.634 12:31:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.634 12:31:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:19.634 12:31:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:19.634 12:31:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:19.634 INFO: launching applications... 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
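The extra_key variant drives the target through the bash associative arrays declared above. A minimal sketch of how those arrays combine into the launch that follows (values copied from the trace):

  SPDK=/home/vagrant/spdk_repo/spdk
  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")
  # Launch the app named "target" with its recorded parameters
  # (app_params left unquoted on purpose so the flags word-split)
  "$SPDK/build/bin/spdk_tgt" ${app_params[target]} \
      -r "${app_socket[target]}" --json "${configs_path[target]}" &
  app_pid[target]=$!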
00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@25 -- # shift 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=104315 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:11:19.634 Waiting for target to run... 00:11:19.634 12:31:02 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:19.635 12:31:02 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 104315 /var/tmp/spdk_tgt.sock 00:11:19.635 12:31:02 -- common/autotest_common.sh@819 -- # '[' -z 104315 ']' 00:11:19.635 12:31:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:19.635 12:31:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:19.635 12:31:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:19.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:19.635 12:31:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:19.635 12:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:19.892 [2024-10-01 12:31:02.212199] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:19.892 [2024-10-01 12:31:02.212339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104315 ] 00:11:20.151 [2024-10-01 12:31:02.605932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.410 [2024-10-01 12:31:02.791369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:20.410 [2024-10-01 12:31:02.791554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.346 12:31:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:21.346 00:11:21.346 12:31:03 -- common/autotest_common.sh@852 -- # return 0 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:11:21.346 INFO: shutting down applications... 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
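The shutdown that follows (like the json_config shutdown earlier in this log) is a SIGINT-and-poll loop: signal the target once, then probe the pid with kill -0 for up to 30 half-second intervals. As a sketch, with the pid taken from the trace:

  pid=104315
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      # kill -0 sends no signal; it only tests whether the pid still exists
      kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done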
00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 104315 ]] 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 104315 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104315 00:11:21.346 12:31:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:11:21.916 12:31:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:11:21.916 12:31:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:11:21.916 12:31:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104315 00:11:21.916 12:31:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:11:22.482 12:31:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:11:22.482 12:31:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:11:22.482 12:31:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104315 00:11:22.482 12:31:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:11:22.740 12:31:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:11:22.740 12:31:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:11:22.740 12:31:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104315 00:11:22.740 12:31:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:11:23.307 12:31:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:11:23.307 12:31:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:11:23.307 12:31:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104315 00:11:23.307 12:31:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:11:23.874 12:31:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:11:23.874 12:31:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:11:23.874 12:31:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104315 00:11:23.874 12:31:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104315 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@52 -- # break 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:11:24.442 SPDK target shutdown done 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:11:24.442 Success 00:11:24.442 12:31:06 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:11:24.442 00:11:24.442 real 0m4.707s 00:11:24.442 user 0m4.274s 00:11:24.442 sys 0m0.602s 00:11:24.442 12:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.442 12:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:24.442 ************************************ 00:11:24.442 END TEST json_config_extra_key 00:11:24.442 ************************************ 00:11:24.442 
12:31:06 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:24.442 12:31:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:24.442 12:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:24.442 12:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:24.442 ************************************ 00:11:24.442 START TEST alias_rpc 00:11:24.442 ************************************ 00:11:24.442 12:31:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:24.442 * Looking for test storage... 00:11:24.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:24.442 12:31:06 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:24.442 12:31:06 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=104437 00:11:24.442 12:31:06 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.442 12:31:06 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 104437 00:11:24.442 12:31:06 -- common/autotest_common.sh@819 -- # '[' -z 104437 ']' 00:11:24.442 12:31:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.442 12:31:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:24.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.442 12:31:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.442 12:31:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:24.442 12:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:24.701 [2024-10-01 12:31:06.997079] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
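The alias_rpc test below amounts to replaying a configuration through rpc.py load_config -i against the freshly started target, then killing it. A sketch, with one assumption flagged: -i is taken here to be rpc.py's short form of --include-aliases (accept deprecated, aliased method names while loading), and the input file name is illustrative, since the trace does not show what was fed on stdin.

  SPDK=/home/vagrant/spdk_repo/spdk
  # Assumption: -i == --include-aliases; aliased_config.json is illustrative
  "$SPDK/scripts/rpc.py" load_config -i < aliased_config.json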
00:11:24.701 [2024-10-01 12:31:06.997240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104437 ] 00:11:24.701 [2024-10-01 12:31:07.164012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.959 [2024-10-01 12:31:07.369948] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:24.959 [2024-10-01 12:31:07.370152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.335 12:31:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:26.335 12:31:08 -- common/autotest_common.sh@852 -- # return 0 00:11:26.335 12:31:08 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:26.335 12:31:08 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 104437 00:11:26.335 12:31:08 -- common/autotest_common.sh@926 -- # '[' -z 104437 ']' 00:11:26.335 12:31:08 -- common/autotest_common.sh@930 -- # kill -0 104437 00:11:26.335 12:31:08 -- common/autotest_common.sh@931 -- # uname 00:11:26.335 12:31:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:26.335 12:31:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104437 00:11:26.335 12:31:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:26.335 killing process with pid 104437 00:11:26.335 12:31:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:26.335 12:31:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104437' 00:11:26.335 12:31:08 -- common/autotest_common.sh@945 -- # kill 104437 00:11:26.335 12:31:08 -- common/autotest_common.sh@950 -- # wait 104437 00:11:28.886 00:11:28.886 real 0m4.351s 00:11:28.886 user 0m4.500s 00:11:28.886 sys 0m0.573s 00:11:28.886 12:31:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.886 12:31:11 -- common/autotest_common.sh@10 -- # set +x 00:11:28.886 ************************************ 00:11:28.886 END TEST alias_rpc 00:11:28.887 ************************************ 00:11:28.887 12:31:11 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:11:28.887 12:31:11 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:28.887 12:31:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:28.887 12:31:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.887 12:31:11 -- common/autotest_common.sh@10 -- # set +x 00:11:28.887 ************************************ 00:11:28.887 START TEST spdkcli_tcp 00:11:28.887 ************************************ 00:11:28.887 12:31:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:28.887 * Looking for test storage... 
00:11:28.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:28.887 12:31:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:28.887 12:31:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:28.887 12:31:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:28.887 12:31:11 -- common/autotest_common.sh@10 -- # set +x 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104550 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:28.887 12:31:11 -- spdkcli/tcp.sh@27 -- # waitforlisten 104550 00:11:28.887 12:31:11 -- common/autotest_common.sh@819 -- # '[' -z 104550 ']' 00:11:28.887 12:31:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.887 12:31:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:28.887 12:31:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.887 12:31:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:28.887 12:31:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.147 [2024-10-01 12:31:11.444229] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
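The TCP half of this test, visible below, bridges the target's UNIX-domain RPC socket to TCP with socat, then issues RPCs against 127.0.0.1:9998. Condensed from the trace; reading -r as connection retries and -t as a timeout in seconds follows rpc.py's usual options and is an assumption here:

  # Forward TCP port 9998 to the target's UNIX RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # 100 connection retries, 2 s timeout (assumed flag semantics)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
      -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"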
00:11:29.147 [2024-10-01 12:31:11.444370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104550 ] 00:11:29.147 [2024-10-01 12:31:11.613145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:29.406 [2024-10-01 12:31:11.810638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:29.406 [2024-10-01 12:31:11.810976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.406 [2024-10-01 12:31:11.810983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.788 12:31:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:30.788 12:31:12 -- common/autotest_common.sh@852 -- # return 0 00:11:30.788 12:31:12 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:30.788 12:31:12 -- spdkcli/tcp.sh@31 -- # socat_pid=104581 00:11:30.788 12:31:12 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:30.788 [ 00:11:30.788 "spdk_get_version", 00:11:30.788 "rpc_get_methods", 00:11:30.788 "trace_get_info", 00:11:30.788 "trace_get_tpoint_group_mask", 00:11:30.788 "trace_disable_tpoint_group", 00:11:30.788 "trace_enable_tpoint_group", 00:11:30.788 "trace_clear_tpoint_mask", 00:11:30.788 "trace_set_tpoint_mask", 00:11:30.788 "framework_get_pci_devices", 00:11:30.788 "framework_get_config", 00:11:30.788 "framework_get_subsystems", 00:11:30.788 "iobuf_get_stats", 00:11:30.788 "iobuf_set_options", 00:11:30.788 "sock_set_default_impl", 00:11:30.788 "sock_impl_set_options", 00:11:30.788 "sock_impl_get_options", 00:11:30.788 "vmd_rescan", 00:11:30.788 "vmd_remove_device", 00:11:30.788 "vmd_enable", 00:11:30.788 "accel_get_stats", 00:11:30.788 "accel_set_options", 00:11:30.788 "accel_set_driver", 00:11:30.788 "accel_crypto_key_destroy", 00:11:30.788 "accel_crypto_keys_get", 00:11:30.788 "accel_crypto_key_create", 00:11:30.788 "accel_assign_opc", 00:11:30.788 "accel_get_module_info", 00:11:30.788 "accel_get_opc_assignments", 00:11:30.788 "notify_get_notifications", 00:11:30.788 "notify_get_types", 00:11:30.788 "bdev_get_histogram", 00:11:30.788 "bdev_enable_histogram", 00:11:30.788 "bdev_set_qos_limit", 00:11:30.788 "bdev_set_qd_sampling_period", 00:11:30.788 "bdev_get_bdevs", 00:11:30.788 "bdev_reset_iostat", 00:11:30.788 "bdev_get_iostat", 00:11:30.788 "bdev_examine", 00:11:30.788 "bdev_wait_for_examine", 00:11:30.788 "bdev_set_options", 00:11:30.788 "scsi_get_devices", 00:11:30.788 "thread_set_cpumask", 00:11:30.789 "framework_get_scheduler", 00:11:30.789 "framework_set_scheduler", 00:11:30.789 "framework_get_reactors", 00:11:30.789 "thread_get_io_channels", 00:11:30.789 "thread_get_pollers", 00:11:30.789 "thread_get_stats", 00:11:30.789 "framework_monitor_context_switch", 00:11:30.789 "spdk_kill_instance", 00:11:30.789 "log_enable_timestamps", 00:11:30.789 "log_get_flags", 00:11:30.789 "log_clear_flag", 00:11:30.789 "log_set_flag", 00:11:30.789 "log_get_level", 00:11:30.789 "log_set_level", 00:11:30.789 "log_get_print_level", 00:11:30.789 "log_set_print_level", 00:11:30.789 "framework_enable_cpumask_locks", 00:11:30.789 "framework_disable_cpumask_locks", 00:11:30.789 "framework_wait_init", 00:11:30.789 "framework_start_init", 00:11:30.789 "virtio_blk_create_transport", 00:11:30.789 "virtio_blk_get_transports", 
00:11:30.789 "vhost_controller_set_coalescing", 00:11:30.789 "vhost_get_controllers", 00:11:30.789 "vhost_delete_controller", 00:11:30.789 "vhost_create_blk_controller", 00:11:30.789 "vhost_scsi_controller_remove_target", 00:11:30.789 "vhost_scsi_controller_add_target", 00:11:30.789 "vhost_start_scsi_controller", 00:11:30.789 "vhost_create_scsi_controller", 00:11:30.789 "nbd_get_disks", 00:11:30.789 "nbd_stop_disk", 00:11:30.789 "nbd_start_disk", 00:11:30.789 "env_dpdk_get_mem_stats", 00:11:30.789 "nvmf_subsystem_get_listeners", 00:11:30.789 "nvmf_subsystem_get_qpairs", 00:11:30.789 "nvmf_subsystem_get_controllers", 00:11:30.789 "nvmf_get_stats", 00:11:30.789 "nvmf_get_transports", 00:11:30.789 "nvmf_create_transport", 00:11:30.789 "nvmf_get_targets", 00:11:30.789 "nvmf_delete_target", 00:11:30.789 "nvmf_create_target", 00:11:30.789 "nvmf_subsystem_allow_any_host", 00:11:30.789 "nvmf_subsystem_remove_host", 00:11:30.789 "nvmf_subsystem_add_host", 00:11:30.789 "nvmf_subsystem_remove_ns", 00:11:30.789 "nvmf_subsystem_add_ns", 00:11:30.789 "nvmf_subsystem_listener_set_ana_state", 00:11:30.789 "nvmf_discovery_get_referrals", 00:11:30.789 "nvmf_discovery_remove_referral", 00:11:30.789 "nvmf_discovery_add_referral", 00:11:30.789 "nvmf_subsystem_remove_listener", 00:11:30.789 "nvmf_subsystem_add_listener", 00:11:30.789 "nvmf_delete_subsystem", 00:11:30.789 "nvmf_create_subsystem", 00:11:30.789 "nvmf_get_subsystems", 00:11:30.789 "nvmf_set_crdt", 00:11:30.789 "nvmf_set_config", 00:11:30.789 "nvmf_set_max_subsystems", 00:11:30.789 "iscsi_set_options", 00:11:30.789 "iscsi_get_auth_groups", 00:11:30.789 "iscsi_auth_group_remove_secret", 00:11:30.789 "iscsi_auth_group_add_secret", 00:11:30.789 "iscsi_delete_auth_group", 00:11:30.789 "iscsi_create_auth_group", 00:11:30.789 "iscsi_set_discovery_auth", 00:11:30.789 "iscsi_get_options", 00:11:30.789 "iscsi_target_node_request_logout", 00:11:30.789 "iscsi_target_node_set_redirect", 00:11:30.789 "iscsi_target_node_set_auth", 00:11:30.789 "iscsi_target_node_add_lun", 00:11:30.789 "iscsi_get_connections", 00:11:30.789 "iscsi_portal_group_set_auth", 00:11:30.789 "iscsi_start_portal_group", 00:11:30.789 "iscsi_delete_portal_group", 00:11:30.789 "iscsi_create_portal_group", 00:11:30.789 "iscsi_get_portal_groups", 00:11:30.789 "iscsi_delete_target_node", 00:11:30.789 "iscsi_target_node_remove_pg_ig_maps", 00:11:30.789 "iscsi_target_node_add_pg_ig_maps", 00:11:30.789 "iscsi_create_target_node", 00:11:30.789 "iscsi_get_target_nodes", 00:11:30.789 "iscsi_delete_initiator_group", 00:11:30.789 "iscsi_initiator_group_remove_initiators", 00:11:30.789 "iscsi_initiator_group_add_initiators", 00:11:30.789 "iscsi_create_initiator_group", 00:11:30.789 "iscsi_get_initiator_groups", 00:11:30.789 "iaa_scan_accel_module", 00:11:30.789 "dsa_scan_accel_module", 00:11:30.789 "ioat_scan_accel_module", 00:11:30.789 "accel_error_inject_error", 00:11:30.789 "bdev_iscsi_delete", 00:11:30.789 "bdev_iscsi_create", 00:11:30.789 "bdev_iscsi_set_options", 00:11:30.789 "bdev_virtio_attach_controller", 00:11:30.789 "bdev_virtio_scsi_get_devices", 00:11:30.789 "bdev_virtio_detach_controller", 00:11:30.789 "bdev_virtio_blk_set_hotplug", 00:11:30.789 "bdev_ftl_set_property", 00:11:30.789 "bdev_ftl_get_properties", 00:11:30.789 "bdev_ftl_get_stats", 00:11:30.789 "bdev_ftl_unmap", 00:11:30.789 "bdev_ftl_unload", 00:11:30.789 "bdev_ftl_delete", 00:11:30.789 "bdev_ftl_load", 00:11:30.789 "bdev_ftl_create", 00:11:30.789 "bdev_aio_delete", 00:11:30.789 "bdev_aio_rescan", 00:11:30.789 "bdev_aio_create", 
00:11:30.789 "blobfs_create", 00:11:30.789 "blobfs_detect", 00:11:30.789 "blobfs_set_cache_size", 00:11:30.789 "bdev_zone_block_delete", 00:11:30.789 "bdev_zone_block_create", 00:11:30.789 "bdev_delay_delete", 00:11:30.789 "bdev_delay_create", 00:11:30.789 "bdev_delay_update_latency", 00:11:30.789 "bdev_split_delete", 00:11:30.789 "bdev_split_create", 00:11:30.789 "bdev_error_inject_error", 00:11:30.789 "bdev_error_delete", 00:11:30.789 "bdev_error_create", 00:11:30.789 "bdev_raid_set_options", 00:11:30.789 "bdev_raid_remove_base_bdev", 00:11:30.789 "bdev_raid_add_base_bdev", 00:11:30.789 "bdev_raid_delete", 00:11:30.789 "bdev_raid_create", 00:11:30.789 "bdev_raid_get_bdevs", 00:11:30.789 "bdev_lvol_grow_lvstore", 00:11:30.789 "bdev_lvol_get_lvols", 00:11:30.789 "bdev_lvol_get_lvstores", 00:11:30.789 "bdev_lvol_delete", 00:11:30.789 "bdev_lvol_set_read_only", 00:11:30.789 "bdev_lvol_resize", 00:11:30.789 "bdev_lvol_decouple_parent", 00:11:30.789 "bdev_lvol_inflate", 00:11:30.789 "bdev_lvol_rename", 00:11:30.789 "bdev_lvol_clone_bdev", 00:11:30.789 "bdev_lvol_clone", 00:11:30.789 "bdev_lvol_snapshot", 00:11:30.789 "bdev_lvol_create", 00:11:30.789 "bdev_lvol_delete_lvstore", 00:11:30.789 "bdev_lvol_rename_lvstore", 00:11:30.789 "bdev_lvol_create_lvstore", 00:11:30.789 "bdev_passthru_delete", 00:11:30.789 "bdev_passthru_create", 00:11:30.789 "bdev_nvme_cuse_unregister", 00:11:30.789 "bdev_nvme_cuse_register", 00:11:30.789 "bdev_opal_new_user", 00:11:30.789 "bdev_opal_set_lock_state", 00:11:30.789 "bdev_opal_delete", 00:11:30.789 "bdev_opal_get_info", 00:11:30.789 "bdev_opal_create", 00:11:30.789 "bdev_nvme_opal_revert", 00:11:30.789 "bdev_nvme_opal_init", 00:11:30.789 "bdev_nvme_send_cmd", 00:11:30.789 "bdev_nvme_get_path_iostat", 00:11:30.789 "bdev_nvme_get_mdns_discovery_info", 00:11:30.789 "bdev_nvme_stop_mdns_discovery", 00:11:30.789 "bdev_nvme_start_mdns_discovery", 00:11:30.789 "bdev_nvme_set_multipath_policy", 00:11:30.789 "bdev_nvme_set_preferred_path", 00:11:30.789 "bdev_nvme_get_io_paths", 00:11:30.789 "bdev_nvme_remove_error_injection", 00:11:30.789 "bdev_nvme_add_error_injection", 00:11:30.789 "bdev_nvme_get_discovery_info", 00:11:30.789 "bdev_nvme_stop_discovery", 00:11:30.789 "bdev_nvme_start_discovery", 00:11:30.789 "bdev_nvme_get_controller_health_info", 00:11:30.789 "bdev_nvme_disable_controller", 00:11:30.789 "bdev_nvme_enable_controller", 00:11:30.789 "bdev_nvme_reset_controller", 00:11:30.789 "bdev_nvme_get_transport_statistics", 00:11:30.789 "bdev_nvme_apply_firmware", 00:11:30.789 "bdev_nvme_detach_controller", 00:11:30.789 "bdev_nvme_get_controllers", 00:11:30.789 "bdev_nvme_attach_controller", 00:11:30.789 "bdev_nvme_set_hotplug", 00:11:30.789 "bdev_nvme_set_options", 00:11:30.789 "bdev_null_resize", 00:11:30.789 "bdev_null_delete", 00:11:30.789 "bdev_null_create", 00:11:30.789 "bdev_malloc_delete", 00:11:30.789 "bdev_malloc_create" 00:11:30.789 ] 00:11:30.789 12:31:13 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:30.789 12:31:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:30.789 12:31:13 -- common/autotest_common.sh@10 -- # set +x 00:11:30.789 12:31:13 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:30.789 12:31:13 -- spdkcli/tcp.sh@38 -- # killprocess 104550 00:11:30.789 12:31:13 -- common/autotest_common.sh@926 -- # '[' -z 104550 ']' 00:11:30.789 12:31:13 -- common/autotest_common.sh@930 -- # kill -0 104550 00:11:30.789 12:31:13 -- common/autotest_common.sh@931 -- # uname 00:11:30.789 12:31:13 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:11:30.789 12:31:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104550 00:11:30.789 killing process with pid 104550 00:11:30.789 12:31:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:30.789 12:31:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:30.789 12:31:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104550' 00:11:30.789 12:31:13 -- common/autotest_common.sh@945 -- # kill 104550 00:11:30.789 12:31:13 -- common/autotest_common.sh@950 -- # wait 104550 00:11:33.328 00:11:33.328 real 0m4.417s 00:11:33.328 user 0m7.987s 00:11:33.328 sys 0m0.620s 00:11:33.328 12:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.328 12:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:33.328 ************************************ 00:11:33.328 END TEST spdkcli_tcp 00:11:33.328 ************************************ 00:11:33.328 12:31:15 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:33.328 12:31:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:33.328 12:31:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:33.328 12:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:33.328 ************************************ 00:11:33.328 START TEST dpdk_mem_utility 00:11:33.328 ************************************ 00:11:33.328 12:31:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:33.328 * Looking for test storage... 00:11:33.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:33.328 12:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:33.328 12:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104676 00:11:33.328 12:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:33.328 12:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104676 00:11:33.328 12:31:15 -- common/autotest_common.sh@819 -- # '[' -z 104676 ']' 00:11:33.328 12:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.328 12:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:33.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.328 12:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.328 12:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:33.328 12:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:33.588 [2024-10-01 12:31:15.920392] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
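The RPC method inventory dumped above is what the spdkcli_tcp tab-completion pass walks through. As a minimal sketch, the same list can be pulled by hand from any running target, assuming spdk_tgt is listening on the default /var/tmp/spdk.sock:

    # rpc_get_methods returns every RPC the target currently exposes,
    # including the bdev_*/blobfs_* entries enumerated above
    ./scripts/rpc.py rpc_get_methods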
00:11:33.588 [2024-10-01 12:31:15.920531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104676 ] 00:11:33.588 [2024-10-01 12:31:16.084019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.847 [2024-10-01 12:31:16.279399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:33.847 [2024-10-01 12:31:16.279583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.227 12:31:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:35.227 12:31:17 -- common/autotest_common.sh@852 -- # return 0 00:11:35.227 12:31:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:35.227 12:31:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:35.227 12:31:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.227 12:31:17 -- common/autotest_common.sh@10 -- # set +x 00:11:35.227 { 00:11:35.227 "filename": "/tmp/spdk_mem_dump.txt" 00:11:35.227 } 00:11:35.227 12:31:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.227 12:31:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:35.227 DPDK memory size 820.000000 MiB in 1 heap(s) 00:11:35.227 1 heaps totaling size 820.000000 MiB 00:11:35.227 size: 820.000000 MiB heap id: 0 00:11:35.227 end heaps---------- 00:11:35.227 8 mempools totaling size 598.116089 MiB 00:11:35.227 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:35.227 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:35.227 size: 84.521057 MiB name: bdev_io_104676 00:11:35.227 size: 51.011292 MiB name: evtpool_104676 00:11:35.227 size: 50.003479 MiB name: msgpool_104676 00:11:35.227 size: 21.763794 MiB name: PDU_Pool 00:11:35.227 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:35.227 size: 0.026123 MiB name: Session_Pool 00:11:35.227 end mempools------- 00:11:35.227 6 memzones totaling size 4.142822 MiB 00:11:35.227 size: 1.000366 MiB name: RG_ring_0_104676 00:11:35.227 size: 1.000366 MiB name: RG_ring_1_104676 00:11:35.227 size: 1.000366 MiB name: RG_ring_4_104676 00:11:35.227 size: 1.000366 MiB name: RG_ring_5_104676 00:11:35.227 size: 0.125366 MiB name: RG_ring_2_104676 00:11:35.227 size: 0.015991 MiB name: RG_ring_3_104676 00:11:35.227 end memzones------- 00:11:35.227 12:31:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:35.227 heap id: 0 total size: 820.000000 MiB number of busy elements: 218 number of free elements: 18 00:11:35.227 list of free elements. 
size: 18.471680 MiB 00:11:35.227 element at address: 0x200000400000 with size: 1.999451 MiB 00:11:35.227 element at address: 0x200000800000 with size: 1.996887 MiB 00:11:35.227 element at address: 0x200007000000 with size: 1.995972 MiB 00:11:35.227 element at address: 0x20000b200000 with size: 1.995972 MiB 00:11:35.227 element at address: 0x200019100040 with size: 0.999939 MiB 00:11:35.227 element at address: 0x200019500040 with size: 0.999939 MiB 00:11:35.227 element at address: 0x200019600000 with size: 0.999329 MiB 00:11:35.227 element at address: 0x200003e00000 with size: 0.996094 MiB 00:11:35.228 element at address: 0x200032200000 with size: 0.994324 MiB 00:11:35.228 element at address: 0x200018e00000 with size: 0.959656 MiB 00:11:35.228 element at address: 0x200019900040 with size: 0.937256 MiB 00:11:35.228 element at address: 0x200000200000 with size: 0.835083 MiB 00:11:35.228 element at address: 0x20001b000000 with size: 0.562195 MiB 00:11:35.228 element at address: 0x200019200000 with size: 0.489197 MiB 00:11:35.228 element at address: 0x200019a00000 with size: 0.485413 MiB 00:11:35.228 element at address: 0x200013800000 with size: 0.469116 MiB 00:11:35.228 element at address: 0x200028400000 with size: 0.399719 MiB 00:11:35.228 element at address: 0x200003a00000 with size: 0.356140 MiB 00:11:35.228 list of standard malloc elements. size: 199.263916 MiB 00:11:35.228 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:11:35.228 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:11:35.228 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:11:35.228 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:11:35.228 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:11:35.228 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:11:35.228 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:11:35.228 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:11:35.228 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:11:35.228 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:11:35.228 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:11:35.228 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:11:35.228 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200003aff980 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200003affa80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200003eff000 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200013878180 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200013878280 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200013878380 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200013878480 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200013878580 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:11:35.228 element at 
address: 0x20001927d4c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:11:35.228 element at address: 0x200019abc680 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0927c0 
with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:11:35.228 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:11:35.229 element at address: 0x200028466540 with size: 0.000244 MiB 00:11:35.229 element at address: 0x200028466640 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846d300 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846d580 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846d680 with size: 0.000244 MiB 
00:11:35.229 element at address: 0x20002846d780 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846d880 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846d980 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846da80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846db80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846de80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846df80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e080 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e180 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e280 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e380 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e480 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e580 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e680 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e780 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e880 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846e980 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f080 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f180 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f280 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f380 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f480 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f580 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f680 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f780 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f880 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846f980 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:11:35.229 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:11:35.229 list of memzone associated elements. 
size: 602.264404 MiB 00:11:35.229 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:11:35.229 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:35.229 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:11:35.229 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:35.229 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:11:35.229 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_104676_0 00:11:35.229 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:11:35.229 associated memzone info: size: 48.002930 MiB name: MP_evtpool_104676_0 00:11:35.229 element at address: 0x200003fff340 with size: 48.003113 MiB 00:11:35.229 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104676_0 00:11:35.229 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:11:35.229 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:35.229 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:11:35.229 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:35.229 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:11:35.229 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_104676 00:11:35.229 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:11:35.229 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104676 00:11:35.229 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:11:35.229 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104676 00:11:35.229 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:11:35.229 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:35.229 element at address: 0x200019abc780 with size: 1.008179 MiB 00:11:35.229 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:35.229 element at address: 0x200018efde00 with size: 1.008179 MiB 00:11:35.229 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:35.229 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:11:35.229 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:35.229 element at address: 0x200003eff100 with size: 1.000549 MiB 00:11:35.229 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104676 00:11:35.229 element at address: 0x200003affb80 with size: 1.000549 MiB 00:11:35.229 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104676 00:11:35.229 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:11:35.229 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104676 00:11:35.229 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:11:35.229 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104676 00:11:35.229 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:11:35.229 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104676 00:11:35.229 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:11:35.229 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:35.229 element at address: 0x200013878680 with size: 0.500549 MiB 00:11:35.229 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:35.229 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:11:35.229 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:35.229 element at address: 0x200003adf740 with size: 0.125549 MiB 00:11:35.229 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_104676 00:11:35.229 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:11:35.229 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:35.229 element at address: 0x200028466740 with size: 0.023804 MiB 00:11:35.229 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:35.229 element at address: 0x200003adb500 with size: 0.016174 MiB 00:11:35.229 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104676 00:11:35.229 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:11:35.229 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:35.229 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:11:35.229 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104676 00:11:35.229 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:11:35.229 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104676 00:11:35.229 element at address: 0x20002846d400 with size: 0.000366 MiB 00:11:35.229 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:35.229 12:31:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:35.229 12:31:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104676 00:11:35.229 12:31:17 -- common/autotest_common.sh@926 -- # '[' -z 104676 ']' 00:11:35.229 12:31:17 -- common/autotest_common.sh@930 -- # kill -0 104676 00:11:35.229 12:31:17 -- common/autotest_common.sh@931 -- # uname 00:11:35.229 12:31:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:35.229 12:31:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104676 00:11:35.229 12:31:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:35.229 12:31:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:35.229 12:31:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104676' 00:11:35.229 killing process with pid 104676 00:11:35.230 12:31:17 -- common/autotest_common.sh@945 -- # kill 104676 00:11:35.230 12:31:17 -- common/autotest_common.sh@950 -- # wait 104676 00:11:37.766 00:11:37.766 real 0m4.205s 00:11:37.766 user 0m4.217s 00:11:37.766 sys 0m0.578s 00:11:37.766 12:31:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.766 ************************************ 00:11:37.766 END TEST dpdk_mem_utility 00:11:37.766 ************************************ 00:11:37.767 12:31:19 -- common/autotest_common.sh@10 -- # set +x 00:11:37.767 12:31:19 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:37.767 12:31:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:37.767 12:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.767 12:31:19 -- common/autotest_common.sh@10 -- # set +x 00:11:37.767 ************************************ 00:11:37.767 START TEST event 00:11:37.767 ************************************ 00:11:37.767 12:31:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:37.767 * Looking for test storage... 
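The dpdk_mem_utility run above has two moving parts: the env_dpdk_get_mem_stats RPC, which makes the target write its allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump into the heap/mempool/memzone report shown. A condensed sketch of the same flow against a running spdk_tgt on the default socket, using only the commands traced above:

    ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0            # per-element detail for heap 0, as dumped above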
00:11:37.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:37.767 12:31:20 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:37.767 12:31:20 -- bdev/nbd_common.sh@6 -- # set -e 00:11:37.767 12:31:20 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:37.767 12:31:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:37.767 12:31:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.767 12:31:20 -- common/autotest_common.sh@10 -- # set +x 00:11:37.767 ************************************ 00:11:37.767 START TEST event_perf 00:11:37.767 ************************************ 00:11:37.767 12:31:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:37.767 Running I/O for 1 seconds...[2024-10-01 12:31:20.197417] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:37.767 [2024-10-01 12:31:20.197544] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104795 ] 00:11:38.026 [2024-10-01 12:31:20.376830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.285 [2024-10-01 12:31:20.572474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.285 [2024-10-01 12:31:20.572686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.285 [2024-10-01 12:31:20.572879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.285 [2024-10-01 12:31:20.572894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.662 Running I/O for 1 seconds... 00:11:39.662 lcore 0: 202136 00:11:39.662 lcore 1: 202136 00:11:39.662 lcore 2: 202136 00:11:39.662 lcore 3: 202135 00:11:39.662 done. 00:11:39.662 00:11:39.662 real 0m1.874s 00:11:39.662 user 0m4.619s 00:11:39.662 sys 0m0.136s 00:11:39.662 12:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.662 12:31:22 -- common/autotest_common.sh@10 -- # set +x 00:11:39.662 ************************************ 00:11:39.662 END TEST event_perf 00:11:39.662 ************************************ 00:11:39.662 12:31:22 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:39.662 12:31:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:39.662 12:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.662 12:31:22 -- common/autotest_common.sh@10 -- # set +x 00:11:39.662 ************************************ 00:11:39.662 START TEST event_reactor 00:11:39.662 ************************************ 00:11:39.662 12:31:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:39.662 [2024-10-01 12:31:22.138929] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
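event_perf above is a standalone micro-benchmark. As invoked here, -m 0xF spreads reactors across four cores and -t 1 runs the measurement for one second, after which each lcore prints its event counter (the "lcore N: ..." lines, roughly 200k per core in this run). A sketch of the same invocation:

    ./test/event/event_perf/event_perf -m 0xF -t 1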
00:11:39.662 [2024-10-01 12:31:22.139059] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104846 ] 00:11:39.922 [2024-10-01 12:31:22.304706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.209 [2024-10-01 12:31:22.496862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.589 test_start 00:11:41.589 oneshot 00:11:41.589 tick 100 00:11:41.589 tick 100 00:11:41.589 tick 250 00:11:41.589 tick 100 00:11:41.589 tick 100 00:11:41.589 tick 100 00:11:41.589 tick 250 00:11:41.589 tick 500 00:11:41.589 tick 100 00:11:41.589 tick 100 00:11:41.589 tick 250 00:11:41.589 tick 100 00:11:41.589 tick 100 00:11:41.589 test_end 00:11:41.589 00:11:41.589 real 0m1.817s 00:11:41.589 user 0m1.573s 00:11:41.589 sys 0m0.144s 00:11:41.589 12:31:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.589 ************************************ 00:11:41.589 END TEST event_reactor 00:11:41.589 ************************************ 00:11:41.589 12:31:23 -- common/autotest_common.sh@10 -- # set +x 00:11:41.589 12:31:23 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:41.589 12:31:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:41.589 12:31:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:41.589 12:31:23 -- common/autotest_common.sh@10 -- # set +x 00:11:41.589 ************************************ 00:11:41.589 START TEST event_reactor_perf 00:11:41.589 ************************************ 00:11:41.589 12:31:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:41.589 [2024-10-01 12:31:24.016892] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
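The test_start/tick/test_end trace above comes from the reactor test app, which appears to schedule timer events on a single reactor and log a tick line as each fires; its sibling reactor_perf instead measures raw event throughput and prints the "Performance: N events per second" figure seen below. Both take the same run-time flag, as a sketch:

    ./test/event/reactor/reactor -t 1              # timer-event trace, as above
    ./test/event/reactor_perf/reactor_perf -t 1    # events-per-second figure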
00:11:41.589 [2024-10-01 12:31:24.017391] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104896 ] 00:11:41.848 [2024-10-01 12:31:24.183509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.848 [2024-10-01 12:31:24.370131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.228 test_start 00:11:43.228 test_end 00:11:43.228 Performance: 414870 events per second 00:11:43.487 00:11:43.487 real 0m1.807s 00:11:43.487 user 0m1.586s 00:11:43.487 sys 0m0.121s 00:11:43.487 12:31:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.487 ************************************ 00:11:43.487 END TEST event_reactor_perf 00:11:43.487 ************************************ 00:11:43.487 12:31:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.487 12:31:25 -- event/event.sh@49 -- # uname -s 00:11:43.487 12:31:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:43.487 12:31:25 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:43.487 12:31:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:43.487 12:31:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.487 12:31:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.487 ************************************ 00:11:43.487 START TEST event_scheduler 00:11:43.487 ************************************ 00:11:43.487 12:31:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:43.487 * Looking for test storage... 00:11:43.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:43.487 12:31:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:43.487 12:31:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=104967 00:11:43.487 12:31:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:43.487 12:31:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 104967 00:11:43.487 12:31:25 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:43.487 12:31:25 -- common/autotest_common.sh@819 -- # '[' -z 104967 ']' 00:11:43.487 12:31:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.487 12:31:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:43.488 12:31:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.488 12:31:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:43.488 12:31:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.746 [2024-10-01 12:31:26.031108] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
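The scheduler test launches its app with -m 0xF (four reactors), -p 0x2 (main lcore 2) and --wait-for-rpc, so framework startup is deferred until the script has installed the dynamic scheduler over RPC. Note the POWER/GUEST_CHANNEL errors below: no cpufreq interface is usable inside this VM, so the dpdk governor fails to initialize and the dynamic scheduler falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95). A condensed sketch of the control flow, assuming rpc.py can find the test's scheduler_plugin module (the harness arranges that; scheduler_thread_create is a test-plugin RPC, not a stock SPDK one):

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # -n names the thread, -m pins it to a core mask (0x1 = core 0),
    # -a sets how busy it reports itself (100 = fully active, 0 = idle)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100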
00:11:43.746 [2024-10-01 12:31:26.031433] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104967 ] 00:11:43.746 [2024-10-01 12:31:26.209001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.005 [2024-10-01 12:31:26.403666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.005 [2024-10-01 12:31:26.403972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.005 [2024-10-01 12:31:26.403869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.005 [2024-10-01 12:31:26.403989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.574 12:31:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:44.574 12:31:26 -- common/autotest_common.sh@852 -- # return 0 00:11:44.574 12:31:26 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:44.574 12:31:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.574 12:31:26 -- common/autotest_common.sh@10 -- # set +x 00:11:44.574 POWER: Env isn't set yet! 00:11:44.574 POWER: Attempting to initialise ACPI cpufreq power management... 00:11:44.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:44.574 POWER: Cannot set governor of lcore 0 to userspace 00:11:44.574 POWER: Attempting to initialise PSTAT power management... 00:11:44.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:44.574 POWER: Cannot set governor of lcore 0 to performance 00:11:44.574 POWER: Attempting to initialise AMD PSTATE power management... 00:11:44.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:44.574 POWER: Cannot set governor of lcore 0 to userspace 00:11:44.574 POWER: Attempting to initialise CPPC power management... 00:11:44.574 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:44.574 POWER: Cannot set governor of lcore 0 to userspace 00:11:44.574 POWER: Attempting to initialise VM power management... 
00:11:44.574 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:44.574 POWER: Unable to set Power Management Environment for lcore 0 00:11:44.574 [2024-10-01 12:31:26.865544] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:11:44.574 [2024-10-01 12:31:26.865609] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:11:44.574 [2024-10-01 12:31:26.865627] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:11:44.574 [2024-10-01 12:31:26.865683] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:44.574 [2024-10-01 12:31:26.865710] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:44.574 [2024-10-01 12:31:26.865739] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:44.574 12:31:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.574 12:31:26 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:44.574 12:31:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.574 12:31:26 -- common/autotest_common.sh@10 -- # set +x 00:11:44.832 [2024-10-01 12:31:27.165657] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:44.832 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.832 12:31:27 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:44.832 12:31:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:44.832 12:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.832 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.832 ************************************ 00:11:44.832 START TEST scheduler_create_thread 00:11:44.833 ************************************ 00:11:44.833 12:31:27 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 2 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 3 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 4 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 5 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 6 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 7 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 8 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 9 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 10 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 12:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.833 12:31:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:44.833 12:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.833 12:31:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.739 12:31:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.739 12:31:28 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:46.739 12:31:28 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:46.739 12:31:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.739 12:31:28 -- common/autotest_common.sh@10 -- # set +x 00:11:47.362 12:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.362 00:11:47.362 real 0m2.627s 00:11:47.362 user 0m0.006s 00:11:47.362 sys 0m0.010s 00:11:47.362 12:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.362 ************************************ 00:11:47.362 END TEST scheduler_create_thread 
00:11:47.362 ************************************ 00:11:47.362 12:31:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.362 12:31:29 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:47.362 12:31:29 -- scheduler/scheduler.sh@46 -- # killprocess 104967 00:11:47.362 12:31:29 -- common/autotest_common.sh@926 -- # '[' -z 104967 ']' 00:11:47.362 12:31:29 -- common/autotest_common.sh@930 -- # kill -0 104967 00:11:47.362 12:31:29 -- common/autotest_common.sh@931 -- # uname 00:11:47.362 12:31:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.362 12:31:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104967 00:11:47.362 12:31:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:47.362 12:31:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:47.362 killing process with pid 104967 00:11:47.362 12:31:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104967' 00:11:47.362 12:31:29 -- common/autotest_common.sh@945 -- # kill 104967 00:11:47.362 12:31:29 -- common/autotest_common.sh@950 -- # wait 104967 00:11:47.930 [2024-10-01 12:31:30.287376] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:49.308 00:11:49.308 real 0m5.606s 00:11:49.308 user 0m9.277s 00:11:49.308 sys 0m0.449s 00:11:49.308 12:31:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.308 12:31:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.308 ************************************ 00:11:49.308 END TEST event_scheduler 00:11:49.308 ************************************ 00:11:49.308 12:31:31 -- event/event.sh@51 -- # modprobe -n nbd 00:11:49.308 12:31:31 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:49.308 12:31:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:49.308 12:31:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:49.308 12:31:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.308 ************************************ 00:11:49.308 START TEST app_repeat 00:11:49.308 ************************************ 00:11:49.308 12:31:31 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:11:49.308 12:31:31 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:49.308 12:31:31 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:49.308 12:31:31 -- event/event.sh@13 -- # local nbd_list 00:11:49.308 12:31:31 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:49.308 12:31:31 -- event/event.sh@14 -- # local bdev_list 00:11:49.308 12:31:31 -- event/event.sh@15 -- # local repeat_times=4 00:11:49.308 12:31:31 -- event/event.sh@17 -- # modprobe nbd 00:11:49.308 12:31:31 -- event/event.sh@19 -- # repeat_pid=105106 00:11:49.308 12:31:31 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:49.308 12:31:31 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:49.308 Process app_repeat pid: 105106 00:11:49.308 12:31:31 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105106' 00:11:49.308 spdk_app_start Round 0 00:11:49.308 12:31:31 -- event/event.sh@23 -- # for i in {0..2} 00:11:49.308 12:31:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:49.308 12:31:31 -- event/event.sh@25 -- # waitforlisten 105106 /var/tmp/spdk-nbd.sock 00:11:49.308 12:31:31 -- common/autotest_common.sh@819 -- # '[' -z 105106 ']' 00:11:49.308 12:31:31 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:49.308 12:31:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:49.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:49.308 12:31:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:49.308 12:31:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:49.308 12:31:31 -- common/autotest_common.sh@10 -- # set +x 00:11:49.308 [2024-10-01 12:31:31.591560] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:49.308 [2024-10-01 12:31:31.591707] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105106 ] 00:11:49.308 [2024-10-01 12:31:31.761368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:49.567 [2024-10-01 12:31:31.952847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.567 [2024-10-01 12:31:31.952847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.136 12:31:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:50.136 12:31:32 -- common/autotest_common.sh@852 -- # return 0 00:11:50.136 12:31:32 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:50.136 Malloc0 00:11:50.394 12:31:32 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:50.394 Malloc1 00:11:50.654 12:31:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@12 -- # local i 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:50.654 12:31:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:50.654 /dev/nbd0 00:11:50.654 12:31:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:50.654 12:31:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:50.654 12:31:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:50.654 12:31:33 -- common/autotest_common.sh@857 -- # local i 00:11:50.654 12:31:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:50.654 
12:31:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:50.654 12:31:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:50.654 12:31:33 -- common/autotest_common.sh@861 -- # break 00:11:50.654 12:31:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:50.654 12:31:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:50.654 12:31:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:50.654 1+0 records in 00:11:50.654 1+0 records out 00:11:50.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677987 s, 6.0 MB/s 00:11:50.654 12:31:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:50.654 12:31:33 -- common/autotest_common.sh@874 -- # size=4096 00:11:50.654 12:31:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:50.654 12:31:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:50.654 12:31:33 -- common/autotest_common.sh@877 -- # return 0 00:11:50.654 12:31:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:50.654 12:31:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:50.654 12:31:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:50.913 /dev/nbd1 00:11:50.913 12:31:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:50.913 12:31:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:50.913 12:31:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:50.913 12:31:33 -- common/autotest_common.sh@857 -- # local i 00:11:50.913 12:31:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:50.913 12:31:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:50.913 12:31:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:50.913 12:31:33 -- common/autotest_common.sh@861 -- # break 00:11:50.913 12:31:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:50.913 12:31:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:50.913 12:31:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:50.913 1+0 records in 00:11:50.913 1+0 records out 00:11:50.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501715 s, 8.2 MB/s 00:11:50.913 12:31:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:50.913 12:31:33 -- common/autotest_common.sh@874 -- # size=4096 00:11:50.913 12:31:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:50.913 12:31:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:50.913 12:31:33 -- common/autotest_common.sh@877 -- # return 0 00:11:50.913 12:31:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:50.913 12:31:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:50.913 12:31:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:50.913 12:31:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.913 12:31:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:51.172 { 00:11:51.172 "nbd_device": "/dev/nbd0", 00:11:51.172 "bdev_name": "Malloc0" 00:11:51.172 }, 00:11:51.172 { 00:11:51.172 "nbd_device": 
"/dev/nbd1", 00:11:51.172 "bdev_name": "Malloc1" 00:11:51.172 } 00:11:51.172 ]' 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:51.172 { 00:11:51.172 "nbd_device": "/dev/nbd0", 00:11:51.172 "bdev_name": "Malloc0" 00:11:51.172 }, 00:11:51.172 { 00:11:51.172 "nbd_device": "/dev/nbd1", 00:11:51.172 "bdev_name": "Malloc1" 00:11:51.172 } 00:11:51.172 ]' 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:51.172 /dev/nbd1' 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:51.172 /dev/nbd1' 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@65 -- # count=2 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@95 -- # count=2 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:51.172 12:31:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:51.172 256+0 records in 00:11:51.172 256+0 records out 00:11:51.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134458 s, 78.0 MB/s 00:11:51.173 12:31:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:51.173 12:31:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:51.432 256+0 records in 00:11:51.432 256+0 records out 00:11:51.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026513 s, 39.5 MB/s 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:51.432 256+0 records in 00:11:51.432 256+0 records out 00:11:51.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294613 s, 35.6 MB/s 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@51 -- # local i 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.432 12:31:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@41 -- # break 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.691 12:31:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@41 -- # break 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.691 12:31:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@65 -- # true 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@65 -- # count=0 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@104 -- # count=0 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:51.951 12:31:34 -- bdev/nbd_common.sh@109 -- # return 0 00:11:51.951 12:31:34 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:52.518 12:31:34 -- event/event.sh@35 -- # sleep 3 00:11:53.910 [2024-10-01 12:31:36.058172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:53.910 [2024-10-01 12:31:36.230576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.910 
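The waitfornbd helper traced above first polls /proc/partitions until the kernel exposes the device node, then proves the device is readable by pulling a single 4 KiB block with O_DIRECT. A minimal sketch reconstructed from the xtrace lines (the temp-file path is shortened, and the retry back-off is an assumption — the pauses between iterations are not visible in the log):

waitfornbd() {
    local nbd_name=$1 i size
    # phase 1: wait for the device to show up in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed delay, not shown in the trace
    done
    # phase 2: a 4 KiB O_DIRECT read must return real data
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
    done
    return 1
}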
[2024-10-01 12:31:36.230581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.910 [2024-10-01 12:31:36.411346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:53.910 [2024-10-01 12:31:36.411472] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:55.340 12:31:37 -- event/event.sh@23 -- # for i in {0..2} 00:11:55.340 spdk_app_start Round 1 00:11:55.340 12:31:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:55.340 12:31:37 -- event/event.sh@25 -- # waitforlisten 105106 /var/tmp/spdk-nbd.sock 00:11:55.340 12:31:37 -- common/autotest_common.sh@819 -- # '[' -z 105106 ']' 00:11:55.340 12:31:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:55.340 12:31:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:55.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:55.340 12:31:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:55.340 12:31:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:55.340 12:31:37 -- common/autotest_common.sh@10 -- # set +x 00:11:55.598 12:31:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:55.598 12:31:38 -- common/autotest_common.sh@852 -- # return 0 00:11:55.598 12:31:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:55.856 Malloc0 00:11:55.856 12:31:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:56.115 Malloc1 00:11:56.115 12:31:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@12 -- # local i 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:56.115 12:31:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:56.374 /dev/nbd0 00:11:56.374 12:31:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:56.374 12:31:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:56.374 12:31:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:56.374 12:31:38 -- common/autotest_common.sh@857 -- # local i 00:11:56.374 12:31:38 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.374 12:31:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.374 12:31:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:56.374 12:31:38 -- common/autotest_common.sh@861 -- # break 00:11:56.374 12:31:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.374 12:31:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.374 12:31:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:56.374 1+0 records in 00:11:56.374 1+0 records out 00:11:56.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039232 s, 10.4 MB/s 00:11:56.374 12:31:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:56.374 12:31:38 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.374 12:31:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:56.374 12:31:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.374 12:31:38 -- common/autotest_common.sh@877 -- # return 0 00:11:56.374 12:31:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.374 12:31:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:56.374 12:31:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:56.632 /dev/nbd1 00:11:56.632 12:31:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:56.632 12:31:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:56.632 12:31:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:56.632 12:31:38 -- common/autotest_common.sh@857 -- # local i 00:11:56.632 12:31:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.632 12:31:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.632 12:31:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:56.632 12:31:38 -- common/autotest_common.sh@861 -- # break 00:11:56.632 12:31:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.632 12:31:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.632 12:31:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:56.632 1+0 records in 00:11:56.632 1+0 records out 00:11:56.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428835 s, 9.6 MB/s 00:11:56.632 12:31:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:56.632 12:31:38 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.632 12:31:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:56.632 12:31:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.632 12:31:38 -- common/autotest_common.sh@877 -- # return 0 00:11:56.632 12:31:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.632 12:31:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:56.632 12:31:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:56.633 12:31:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.633 12:31:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:56.891 { 00:11:56.891 "nbd_device": "/dev/nbd0", 00:11:56.891 "bdev_name": "Malloc0" 
00:11:56.891 }, 00:11:56.891 { 00:11:56.891 "nbd_device": "/dev/nbd1", 00:11:56.891 "bdev_name": "Malloc1" 00:11:56.891 } 00:11:56.891 ]' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:56.891 { 00:11:56.891 "nbd_device": "/dev/nbd0", 00:11:56.891 "bdev_name": "Malloc0" 00:11:56.891 }, 00:11:56.891 { 00:11:56.891 "nbd_device": "/dev/nbd1", 00:11:56.891 "bdev_name": "Malloc1" 00:11:56.891 } 00:11:56.891 ]' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:56.891 /dev/nbd1' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:56.891 /dev/nbd1' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@65 -- # count=2 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@95 -- # count=2 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:56.891 256+0 records in 00:11:56.891 256+0 records out 00:11:56.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540186 s, 194 MB/s 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:56.891 256+0 records in 00:11:56.891 256+0 records out 00:11:56.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257769 s, 40.7 MB/s 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:56.891 256+0 records in 00:11:56.891 256+0 records out 00:11:56.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314774 s, 33.3 MB/s 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@51 -- # local i 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.891 12:31:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@41 -- # break 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@45 -- # return 0 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.150 12:31:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@41 -- # break 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@45 -- # return 0 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:57.408 12:31:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@65 -- # true 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@65 -- # count=0 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@104 -- # count=0 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:57.667 12:31:39 -- bdev/nbd_common.sh@109 -- # return 0 00:11:57.667 12:31:39 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:57.925 12:31:40 -- event/event.sh@35 -- # sleep 3 00:11:59.302 [2024-10-01 12:31:41.599864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:59.303 [2024-10-01 12:31:41.774201] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:11:59.303 [2024-10-01 12:31:41.774202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.561 [2024-10-01 12:31:41.955426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:59.561 [2024-10-01 12:31:41.955506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:00.936 spdk_app_start Round 2 00:12:00.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:00.937 12:31:43 -- event/event.sh@23 -- # for i in {0..2} 00:12:00.937 12:31:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:00.937 12:31:43 -- event/event.sh@25 -- # waitforlisten 105106 /var/tmp/spdk-nbd.sock 00:12:00.937 12:31:43 -- common/autotest_common.sh@819 -- # '[' -z 105106 ']' 00:12:00.937 12:31:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:00.937 12:31:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:00.937 12:31:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:00.937 12:31:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:00.937 12:31:43 -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 12:31:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:01.195 12:31:43 -- common/autotest_common.sh@852 -- # return 0 00:12:01.195 12:31:43 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:01.455 Malloc0 00:12:01.455 12:31:43 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:01.714 Malloc1 00:12:01.714 12:31:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@12 -- # local i 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:01.714 12:31:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:01.972 /dev/nbd0 00:12:01.972 12:31:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:01.972 12:31:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:01.972 12:31:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:01.972 12:31:44 -- common/autotest_common.sh@857 -- # local i 
00:12:01.972 12:31:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:01.972 12:31:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:01.972 12:31:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:01.972 12:31:44 -- common/autotest_common.sh@861 -- # break 00:12:01.972 12:31:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:01.972 12:31:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:01.972 12:31:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:01.972 1+0 records in 00:12:01.972 1+0 records out 00:12:01.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378847 s, 10.8 MB/s 00:12:01.972 12:31:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:01.972 12:31:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:01.972 12:31:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:01.972 12:31:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:01.972 12:31:44 -- common/autotest_common.sh@877 -- # return 0 00:12:01.972 12:31:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.972 12:31:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:01.972 12:31:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:01.972 /dev/nbd1 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:02.231 12:31:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:02.231 12:31:44 -- common/autotest_common.sh@857 -- # local i 00:12:02.231 12:31:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:02.231 12:31:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:02.231 12:31:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:02.231 12:31:44 -- common/autotest_common.sh@861 -- # break 00:12:02.231 12:31:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:02.231 12:31:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:02.231 12:31:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:02.231 1+0 records in 00:12:02.231 1+0 records out 00:12:02.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000970688 s, 4.2 MB/s 00:12:02.231 12:31:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:02.231 12:31:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:02.231 12:31:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:02.231 12:31:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:02.231 12:31:44 -- common/autotest_common.sh@877 -- # return 0 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:02.231 { 00:12:02.231 "nbd_device": "/dev/nbd0", 00:12:02.231 
"bdev_name": "Malloc0" 00:12:02.231 }, 00:12:02.231 { 00:12:02.231 "nbd_device": "/dev/nbd1", 00:12:02.231 "bdev_name": "Malloc1" 00:12:02.231 } 00:12:02.231 ]' 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:02.231 { 00:12:02.231 "nbd_device": "/dev/nbd0", 00:12:02.231 "bdev_name": "Malloc0" 00:12:02.231 }, 00:12:02.231 { 00:12:02.231 "nbd_device": "/dev/nbd1", 00:12:02.231 "bdev_name": "Malloc1" 00:12:02.231 } 00:12:02.231 ]' 00:12:02.231 12:31:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:02.490 /dev/nbd1' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:02.490 /dev/nbd1' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@65 -- # count=2 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@95 -- # count=2 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:02.490 256+0 records in 00:12:02.490 256+0 records out 00:12:02.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138394 s, 75.8 MB/s 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:02.490 256+0 records in 00:12:02.490 256+0 records out 00:12:02.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258387 s, 40.6 MB/s 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:02.490 256+0 records in 00:12:02.490 256+0 records out 00:12:02.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030375 s, 34.5 MB/s 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@51 -- # local i 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.490 12:31:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@41 -- # break 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.749 12:31:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@41 -- # break 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:03.008 12:31:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@65 -- # true 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@65 -- # count=0 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@104 -- # count=0 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:03.267 12:31:45 -- bdev/nbd_common.sh@109 -- # return 0 00:12:03.267 12:31:45 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:03.564 12:31:45 -- event/event.sh@35 -- # sleep 3 00:12:04.948 [2024-10-01 12:31:47.190967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:04.948 
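Every round above runs the same write/verify pass over both NBD devices: fill a 1 MiB scratch file from /dev/urandom, copy it onto each device with O_DIRECT, then byte-compare the device contents against the file. A condensed sketch of that traced flow (scratch path shortened):

tmp=/tmp/nbdrandtest
dd if=/dev/urandom of=$tmp bs=4096 count=256            # 256 x 4 KiB = 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write pass, bypassing the page cache
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp $nbd                              # verify pass; any differing byte fails the test
done
rm $tmp

The nbd_get_count check that brackets this pass simply asks the target for its disk list and counts entries, as the trace shows: rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd.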
[2024-10-01 12:31:47.369014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.948 [2024-10-01 12:31:47.369014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.207 [2024-10-01 12:31:47.550530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:05.207 [2024-10-01 12:31:47.550626] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:06.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:06.586 12:31:48 -- event/event.sh@38 -- # waitforlisten 105106 /var/tmp/spdk-nbd.sock 00:12:06.587 12:31:48 -- common/autotest_common.sh@819 -- # '[' -z 105106 ']' 00:12:06.587 12:31:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:06.587 12:31:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:06.587 12:31:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:06.587 12:31:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:06.587 12:31:48 -- common/autotest_common.sh@10 -- # set +x 00:12:06.846 12:31:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:06.846 12:31:49 -- common/autotest_common.sh@852 -- # return 0 00:12:06.846 12:31:49 -- event/event.sh@39 -- # killprocess 105106 00:12:06.846 12:31:49 -- common/autotest_common.sh@926 -- # '[' -z 105106 ']' 00:12:06.846 12:31:49 -- common/autotest_common.sh@930 -- # kill -0 105106 00:12:06.846 12:31:49 -- common/autotest_common.sh@931 -- # uname 00:12:06.846 12:31:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:06.846 12:31:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105106 00:12:06.846 12:31:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:06.846 12:31:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:06.846 12:31:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105106' 00:12:06.846 killing process with pid 105106 00:12:06.846 12:31:49 -- common/autotest_common.sh@945 -- # kill 105106 00:12:06.846 12:31:49 -- common/autotest_common.sh@950 -- # wait 105106 00:12:07.783 spdk_app_start is called in Round 0. 00:12:07.783 Shutdown signal received, stop current app iteration 00:12:07.783 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:12:07.783 spdk_app_start is called in Round 1. 00:12:07.783 Shutdown signal received, stop current app iteration 00:12:07.783 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:12:07.783 spdk_app_start is called in Round 2. 00:12:07.783 Shutdown signal received, stop current app iteration 00:12:07.783 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:12:07.783 spdk_app_start is called in Round 3. 
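The killprocess helper that tears down pid 105106 here is roughly the following, reconstructed from the checks visible in the trace (the real helper's treatment of sudo-wrapped processes is reduced to a guard):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                       # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        # never signal a sudo wrapper directly
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap it and propagate its exit status
}

That is the hard-stop path; between rounds the test instead asks the target to shut itself down over RPC, as traced above: rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM.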
00:12:07.783 Shutdown signal received, stop current app iteration 00:12:08.042 ************************************ 00:12:08.042 END TEST app_repeat 00:12:08.042 ************************************ 00:12:08.042 12:31:50 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:08.042 12:31:50 -- event/event.sh@42 -- # return 0 00:12:08.042 00:12:08.042 real 0m18.798s 00:12:08.042 user 0m38.775s 00:12:08.042 sys 0m2.924s 00:12:08.042 12:31:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.042 12:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:08.042 12:31:50 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:08.042 12:31:50 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:08.042 12:31:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:08.042 12:31:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:08.042 12:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:08.042 ************************************ 00:12:08.042 START TEST cpu_locks 00:12:08.042 ************************************ 00:12:08.042 12:31:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:08.042 * Looking for test storage... 00:12:08.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:08.042 12:31:50 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:08.042 12:31:50 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:08.042 12:31:50 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:08.042 12:31:50 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:08.042 12:31:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:08.042 12:31:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:08.042 12:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:08.042 ************************************ 00:12:08.042 START TEST default_locks 00:12:08.042 ************************************ 00:12:08.042 12:31:50 -- common/autotest_common.sh@1104 -- # default_locks 00:12:08.042 12:31:50 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=105603 00:12:08.042 12:31:50 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:08.042 12:31:50 -- event/cpu_locks.sh@47 -- # waitforlisten 105603 00:12:08.042 12:31:50 -- common/autotest_common.sh@819 -- # '[' -z 105603 ']' 00:12:08.043 12:31:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.043 12:31:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:08.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.043 12:31:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.043 12:31:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:08.043 12:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:08.302 [2024-10-01 12:31:50.624697] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
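waitforlisten runs its polling loop under xtrace_disable, so only its argument setup shows in the log. A plausible reconstruction under that caveat: probe the RPC socket until the target answers, bailing out early if the process dies. The rpc_get_methods probe and the 0.5 s interval are assumptions, not taken from this trace:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1       # target died during startup
        # any successful RPC means the socket is up; rpc_get_methods is harmless
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}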
00:12:08.302 [2024-10-01 12:31:50.625227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105603 ] 00:12:08.302 [2024-10-01 12:31:50.790968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.562 [2024-10-01 12:31:50.985192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.562 [2024-10-01 12:31:50.985375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.941 12:31:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:09.941 12:31:52 -- common/autotest_common.sh@852 -- # return 0 00:12:09.941 12:31:52 -- event/cpu_locks.sh@49 -- # locks_exist 105603 00:12:09.941 12:31:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:09.941 12:31:52 -- event/cpu_locks.sh@22 -- # lslocks -p 105603 00:12:09.941 12:31:52 -- event/cpu_locks.sh@50 -- # killprocess 105603 00:12:09.941 12:31:52 -- common/autotest_common.sh@926 -- # '[' -z 105603 ']' 00:12:09.941 12:31:52 -- common/autotest_common.sh@930 -- # kill -0 105603 00:12:09.941 12:31:52 -- common/autotest_common.sh@931 -- # uname 00:12:09.941 12:31:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:09.941 12:31:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105603 00:12:09.941 12:31:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:09.941 killing process with pid 105603 00:12:09.941 12:31:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:09.941 12:31:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105603' 00:12:09.941 12:31:52 -- common/autotest_common.sh@945 -- # kill 105603 00:12:09.941 12:31:52 -- common/autotest_common.sh@950 -- # wait 105603 00:12:12.478 12:31:54 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 105603 00:12:12.478 12:31:54 -- common/autotest_common.sh@640 -- # local es=0 00:12:12.478 12:31:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 105603 00:12:12.478 12:31:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:12:12.478 12:31:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.478 12:31:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:12:12.478 12:31:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.478 12:31:54 -- common/autotest_common.sh@643 -- # waitforlisten 105603 00:12:12.478 12:31:54 -- common/autotest_common.sh@819 -- # '[' -z 105603 ']' 00:12:12.478 12:31:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.478 12:31:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:12.478 12:31:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
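The locks_exist check above is exactly two commands: util-linux lslocks lists every file lock the pid holds, and grep looks for SPDK's per-core lock files, whose paths contain 'spdk_cpu_lock':

locks_exist() {
    local pid=$1
    # each CPU core the target claims is backed by a flock'ed lock file
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}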
00:12:12.478 12:31:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:12.478 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:12.478 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (105603) - No such process 00:12:12.478 ERROR: process (pid: 105603) is no longer running 00:12:12.478 12:31:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:12.478 12:31:54 -- common/autotest_common.sh@852 -- # return 1 00:12:12.478 12:31:54 -- common/autotest_common.sh@643 -- # es=1 00:12:12.478 12:31:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:12.478 12:31:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:12.478 12:31:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:12.478 12:31:54 -- event/cpu_locks.sh@54 -- # no_locks 00:12:12.478 12:31:54 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:12.478 12:31:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:12.478 12:31:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:12.478 00:12:12.478 real 0m4.222s 00:12:12.478 user 0m4.334s 00:12:12.478 sys 0m0.594s 00:12:12.478 12:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.478 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:12.478 ************************************ 00:12:12.478 END TEST default_locks 00:12:12.478 ************************************ 00:12:12.478 12:31:54 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:12.478 12:31:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:12.478 12:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.478 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:12.478 ************************************ 00:12:12.478 START TEST default_locks_via_rpc 00:12:12.478 ************************************ 00:12:12.478 12:31:54 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:12:12.478 12:31:54 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:12.478 12:31:54 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=105692 00:12:12.478 12:31:54 -- event/cpu_locks.sh@63 -- # waitforlisten 105692 00:12:12.478 12:31:54 -- common/autotest_common.sh@819 -- # '[' -z 105692 ']' 00:12:12.478 12:31:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.478 12:31:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.478 12:31:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.478 12:31:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:12.478 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:12.478 [2024-10-01 12:31:54.909853] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
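The negative test that just ran wraps waitforlisten in NOT, which inverts an exit status so an expected failure counts as a pass. The core of the helper, per the es bookkeeping at @640-667 in the trace (the full version's handling of deaths-by-signal is elided):

NOT() {
    local es=0
    "$@" || es=$?
    # arithmetic truth test: succeed only when the wrapped command failed
    (( !es == 0 ))
}

Used as 'NOT waitforlisten 105603': the pid is already gone, waitforlisten returns 1, and NOT turns that into success so the suite continues.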
00:12:12.478 [2024-10-01 12:31:54.910007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105692 ] 00:12:12.740 [2024-10-01 12:31:55.074555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.740 [2024-10-01 12:31:55.273609] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:12.740 [2024-10-01 12:31:55.273794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.119 12:31:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:14.119 12:31:56 -- common/autotest_common.sh@852 -- # return 0 00:12:14.119 12:31:56 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:14.119 12:31:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.119 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:14.119 12:31:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.120 12:31:56 -- event/cpu_locks.sh@67 -- # no_locks 00:12:14.120 12:31:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:14.120 12:31:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:14.120 12:31:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:14.120 12:31:56 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:14.120 12:31:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.120 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 12:31:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.120 12:31:56 -- event/cpu_locks.sh@71 -- # locks_exist 105692 00:12:14.120 12:31:56 -- event/cpu_locks.sh@22 -- # lslocks -p 105692 00:12:14.120 12:31:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:14.379 12:31:56 -- event/cpu_locks.sh@73 -- # killprocess 105692 00:12:14.379 12:31:56 -- common/autotest_common.sh@926 -- # '[' -z 105692 ']' 00:12:14.379 12:31:56 -- common/autotest_common.sh@930 -- # kill -0 105692 00:12:14.379 12:31:56 -- common/autotest_common.sh@931 -- # uname 00:12:14.379 12:31:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:14.379 12:31:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105692 00:12:14.379 12:31:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:14.379 killing process with pid 105692 00:12:14.379 12:31:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:14.379 12:31:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105692' 00:12:14.379 12:31:56 -- common/autotest_common.sh@945 -- # kill 105692 00:12:14.379 12:31:56 -- common/autotest_common.sh@950 -- # wait 105692 00:12:16.917 00:12:16.917 real 0m4.275s 00:12:16.917 user 0m4.403s 00:12:16.917 sys 0m0.643s 00:12:16.917 12:31:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.917 ************************************ 00:12:16.917 END TEST default_locks_via_rpc 00:12:16.917 ************************************ 00:12:16.917 12:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.917 12:31:59 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:16.917 12:31:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:16.917 12:31:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.917 12:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.917 
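default_locks_via_rpc exercises the same lock files over JSON-RPC instead of command-line flags. The two calls from the trace, shown against this run's default socket; the lslocks re-checks below stand in for the suite's no_locks and locks_exist helpers:

pid=105692   # the spdk_tgt pid in this run
# release the per-core lock files while the target keeps running
scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"
# take the locks back without restarting
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock            # held again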
************************************ 00:12:16.917 START TEST non_locking_app_on_locked_coremask 00:12:16.917 ************************************ 00:12:16.917 12:31:59 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:12:16.917 12:31:59 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=105782 00:12:16.917 12:31:59 -- event/cpu_locks.sh@81 -- # waitforlisten 105782 /var/tmp/spdk.sock 00:12:16.917 12:31:59 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:16.917 12:31:59 -- common/autotest_common.sh@819 -- # '[' -z 105782 ']' 00:12:16.917 12:31:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.917 12:31:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:16.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.917 12:31:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.917 12:31:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:16.917 12:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.917 [2024-10-01 12:31:59.268899] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:16.917 [2024-10-01 12:31:59.269033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105782 ] 00:12:16.917 [2024-10-01 12:31:59.434448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.177 [2024-10-01 12:31:59.633445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:17.177 [2024-10-01 12:31:59.633631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.556 12:32:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:18.556 12:32:00 -- common/autotest_common.sh@852 -- # return 0 00:12:18.556 12:32:00 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=105813 00:12:18.556 12:32:00 -- event/cpu_locks.sh@85 -- # waitforlisten 105813 /var/tmp/spdk2.sock 00:12:18.556 12:32:00 -- common/autotest_common.sh@819 -- # '[' -z 105813 ']' 00:12:18.556 12:32:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:18.556 12:32:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:18.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:18.556 12:32:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:18.556 12:32:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:18.556 12:32:00 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:18.556 12:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.556 [2024-10-01 12:32:00.823673] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:18.556 [2024-10-01 12:32:00.823983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105813 ] 00:12:18.556 [2024-10-01 12:32:00.975122] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
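What this test is arranging: pids 105782 and 105813 share core mask 0x1, which only works because the second target opts out of the core locks and talks on its own RPC socket. The shape of the two launches, flags verbatim from the trace:

# instance 1: claims core 0 and flocks its spdk_cpu_lock file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# instance 2: same core, so it must skip the lock and use a second socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

Without --disable-cpumask-locks the second launch would contend for the first instance's core lock at startup instead of printing 'CPU core locks deactivated.'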
00:12:18.556 [2024-10-01 12:32:00.975181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.123 [2024-10-01 12:32:01.358767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.123 [2024-10-01 12:32:01.358973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.501 12:32:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:20.501 12:32:02 -- common/autotest_common.sh@852 -- # return 0 00:12:20.501 12:32:02 -- event/cpu_locks.sh@87 -- # locks_exist 105782 00:12:20.501 12:32:02 -- event/cpu_locks.sh@22 -- # lslocks -p 105782 00:12:20.501 12:32:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:21.070 12:32:03 -- event/cpu_locks.sh@89 -- # killprocess 105782 00:12:21.070 12:32:03 -- common/autotest_common.sh@926 -- # '[' -z 105782 ']' 00:12:21.071 12:32:03 -- common/autotest_common.sh@930 -- # kill -0 105782 00:12:21.071 12:32:03 -- common/autotest_common.sh@931 -- # uname 00:12:21.071 12:32:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:21.071 12:32:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105782 00:12:21.071 12:32:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:21.071 12:32:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:21.071 killing process with pid 105782 00:12:21.071 12:32:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105782' 00:12:21.071 12:32:03 -- common/autotest_common.sh@945 -- # kill 105782 00:12:21.071 12:32:03 -- common/autotest_common.sh@950 -- # wait 105782 00:12:26.373 12:32:08 -- event/cpu_locks.sh@90 -- # killprocess 105813 00:12:26.373 12:32:08 -- common/autotest_common.sh@926 -- # '[' -z 105813 ']' 00:12:26.373 12:32:08 -- common/autotest_common.sh@930 -- # kill -0 105813 00:12:26.373 12:32:08 -- common/autotest_common.sh@931 -- # uname 00:12:26.373 12:32:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:26.373 12:32:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105813 00:12:26.373 12:32:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:26.373 killing process with pid 105813 00:12:26.373 12:32:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:26.373 12:32:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105813' 00:12:26.373 12:32:08 -- common/autotest_common.sh@945 -- # kill 105813 00:12:26.373 12:32:08 -- common/autotest_common.sh@950 -- # wait 105813 00:12:28.290 00:12:28.290 real 0m11.258s 00:12:28.290 user 0m11.769s 00:12:28.290 sys 0m1.240s 00:12:28.290 12:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.290 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 ************************************ 00:12:28.290 END TEST non_locking_app_on_locked_coremask 00:12:28.290 ************************************ 00:12:28.290 12:32:10 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:28.290 12:32:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:28.290 12:32:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:28.290 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 ************************************ 00:12:28.290 START TEST locking_app_on_unlocked_coremask 00:12:28.290 ************************************ 00:12:28.290 12:32:10 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:12:28.290 
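Each case in this file is driven through run_test, which prints the START/END banners and the real/user/sys timing lines seen throughout this log. A sketch of its visible behavior only (banner width and exact output ordering simplified):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                    # source of the real/user/sys summary lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}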
12:32:10 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=105974 00:12:28.290 12:32:10 -- event/cpu_locks.sh@99 -- # waitforlisten 105974 /var/tmp/spdk.sock 00:12:28.290 12:32:10 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:28.290 12:32:10 -- common/autotest_common.sh@819 -- # '[' -z 105974 ']' 00:12:28.290 12:32:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.290 12:32:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:28.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.290 12:32:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.290 12:32:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:28.290 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.290 [2024-10-01 12:32:10.598911] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:28.290 [2024-10-01 12:32:10.599476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105974 ] 00:12:28.290 [2024-10-01 12:32:10.765437] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:28.290 [2024-10-01 12:32:10.765510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.549 [2024-10-01 12:32:10.963472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.549 [2024-10-01 12:32:10.963670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.945 12:32:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:29.945 12:32:12 -- common/autotest_common.sh@852 -- # return 0 00:12:29.945 12:32:12 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=105996 00:12:29.945 12:32:12 -- event/cpu_locks.sh@103 -- # waitforlisten 105996 /var/tmp/spdk2.sock 00:12:29.945 12:32:12 -- common/autotest_common.sh@819 -- # '[' -z 105996 ']' 00:12:29.945 12:32:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:29.945 12:32:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:29.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:29.945 12:32:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:29.945 12:32:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:29.945 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:29.945 12:32:12 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:29.945 [2024-10-01 12:32:12.163743] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:12:29.945 [2024-10-01 12:32:12.163906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105996 ] 00:12:29.945 [2024-10-01 12:32:12.319266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.203 [2024-10-01 12:32:12.733962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:30.203 [2024-10-01 12:32:12.734149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.735 12:32:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:32.735 12:32:14 -- common/autotest_common.sh@852 -- # return 0 00:12:32.735 12:32:14 -- event/cpu_locks.sh@105 -- # locks_exist 105996 00:12:32.735 12:32:14 -- event/cpu_locks.sh@22 -- # lslocks -p 105996 00:12:32.735 12:32:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:32.994 12:32:15 -- event/cpu_locks.sh@107 -- # killprocess 105974 00:12:32.994 12:32:15 -- common/autotest_common.sh@926 -- # '[' -z 105974 ']' 00:12:32.994 12:32:15 -- common/autotest_common.sh@930 -- # kill -0 105974 00:12:32.994 12:32:15 -- common/autotest_common.sh@931 -- # uname 00:12:32.994 12:32:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:32.994 12:32:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105974 00:12:32.994 12:32:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:32.994 killing process with pid 105974 00:12:32.994 12:32:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:32.994 12:32:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105974' 00:12:32.994 12:32:15 -- common/autotest_common.sh@945 -- # kill 105974 00:12:32.994 12:32:15 -- common/autotest_common.sh@950 -- # wait 105974 00:12:38.266 12:32:20 -- event/cpu_locks.sh@108 -- # killprocess 105996 00:12:38.266 12:32:20 -- common/autotest_common.sh@926 -- # '[' -z 105996 ']' 00:12:38.266 12:32:20 -- common/autotest_common.sh@930 -- # kill -0 105996 00:12:38.266 12:32:20 -- common/autotest_common.sh@931 -- # uname 00:12:38.266 12:32:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:38.266 12:32:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105996 00:12:38.266 12:32:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:38.266 12:32:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:38.266 12:32:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105996' 00:12:38.266 killing process with pid 105996 00:12:38.266 12:32:20 -- common/autotest_common.sh@945 -- # kill 105996 00:12:38.266 12:32:20 -- common/autotest_common.sh@950 -- # wait 105996 00:12:40.171 00:12:40.171 real 0m11.892s 00:12:40.171 user 0m12.600s 00:12:40.171 sys 0m1.249s 00:12:40.171 12:32:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.171 12:32:22 -- common/autotest_common.sh@10 -- # set +x 00:12:40.171 ************************************ 00:12:40.171 END TEST locking_app_on_unlocked_coremask 00:12:40.171 ************************************ 00:12:40.171 12:32:22 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:40.172 12:32:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:40.172 12:32:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:40.172 12:32:22 -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.172 ************************************ 00:12:40.172 START TEST locking_app_on_locked_coremask 00:12:40.172 ************************************ 00:12:40.172 12:32:22 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:12:40.172 12:32:22 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=106164 00:12:40.172 12:32:22 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:40.172 12:32:22 -- event/cpu_locks.sh@116 -- # waitforlisten 106164 /var/tmp/spdk.sock 00:12:40.172 12:32:22 -- common/autotest_common.sh@819 -- # '[' -z 106164 ']' 00:12:40.172 12:32:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.172 12:32:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:40.172 12:32:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.172 12:32:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:40.172 12:32:22 -- common/autotest_common.sh@10 -- # set +x 00:12:40.172 [2024-10-01 12:32:22.573336] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:40.172 [2024-10-01 12:32:22.573495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106164 ] 00:12:40.431 [2024-10-01 12:32:22.736261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.431 [2024-10-01 12:32:22.924735] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:40.431 [2024-10-01 12:32:22.924942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.815 12:32:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:41.815 12:32:24 -- common/autotest_common.sh@852 -- # return 0 00:12:41.815 12:32:24 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=106194 00:12:41.815 12:32:24 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:41.815 12:32:24 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 106194 /var/tmp/spdk2.sock 00:12:41.815 12:32:24 -- common/autotest_common.sh@640 -- # local es=0 00:12:41.815 12:32:24 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 106194 /var/tmp/spdk2.sock 00:12:41.815 12:32:24 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:12:41.815 12:32:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:41.815 12:32:24 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:12:41.815 12:32:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:41.815 12:32:24 -- common/autotest_common.sh@643 -- # waitforlisten 106194 /var/tmp/spdk2.sock 00:12:41.815 12:32:24 -- common/autotest_common.sh@819 -- # '[' -z 106194 ']' 00:12:41.815 12:32:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:41.815 12:32:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:41.815 12:32:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:12:41.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:41.815 12:32:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:41.815 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:12:41.815 [2024-10-01 12:32:24.133792] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:41.815 [2024-10-01 12:32:24.133942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106194 ] 00:12:41.815 [2024-10-01 12:32:24.283438] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 106164 has claimed it. 00:12:41.815 [2024-10-01 12:32:24.283519] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:42.383 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (106194) - No such process 00:12:42.383 ERROR: process (pid: 106194) is no longer running 00:12:42.383 12:32:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:42.383 12:32:24 -- common/autotest_common.sh@852 -- # return 1 00:12:42.383 12:32:24 -- common/autotest_common.sh@643 -- # es=1 00:12:42.383 12:32:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:42.383 12:32:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:42.383 12:32:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:42.383 12:32:24 -- event/cpu_locks.sh@122 -- # locks_exist 106164 00:12:42.383 12:32:24 -- event/cpu_locks.sh@22 -- # lslocks -p 106164 00:12:42.383 12:32:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:42.642 12:32:24 -- event/cpu_locks.sh@124 -- # killprocess 106164 00:12:42.642 12:32:24 -- common/autotest_common.sh@926 -- # '[' -z 106164 ']' 00:12:42.642 12:32:24 -- common/autotest_common.sh@930 -- # kill -0 106164 00:12:42.642 12:32:24 -- common/autotest_common.sh@931 -- # uname 00:12:42.642 12:32:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:42.642 12:32:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106164 00:12:42.642 12:32:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:42.642 12:32:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:42.642 12:32:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106164' 00:12:42.642 killing process with pid 106164 00:12:42.642 12:32:25 -- common/autotest_common.sh@945 -- # kill 106164 00:12:42.642 12:32:25 -- common/autotest_common.sh@950 -- # wait 106164 00:12:45.182 ************************************ 00:12:45.182 END TEST locking_app_on_locked_coremask 00:12:45.182 ************************************ 00:12:45.182 00:12:45.182 real 0m4.781s 00:12:45.182 user 0m5.043s 00:12:45.182 sys 0m0.721s 00:12:45.182 12:32:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.182 12:32:27 -- common/autotest_common.sh@10 -- # set +x 00:12:45.182 12:32:27 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:45.182 12:32:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:45.182 12:32:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.182 12:32:27 -- common/autotest_common.sh@10 -- # set +x 00:12:45.182 ************************************ 00:12:45.182 START TEST locking_overlapped_coremask 00:12:45.182 
************************************ 00:12:45.182 12:32:27 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:12:45.182 12:32:27 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=106263 00:12:45.182 12:32:27 -- event/cpu_locks.sh@133 -- # waitforlisten 106263 /var/tmp/spdk.sock 00:12:45.182 12:32:27 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:45.182 12:32:27 -- common/autotest_common.sh@819 -- # '[' -z 106263 ']' 00:12:45.182 12:32:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.182 12:32:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:45.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.182 12:32:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.182 12:32:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:45.182 12:32:27 -- common/autotest_common.sh@10 -- # set +x 00:12:45.182 [2024-10-01 12:32:27.422063] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:45.182 [2024-10-01 12:32:27.422205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106263 ] 00:12:45.182 [2024-10-01 12:32:27.594984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:45.455 [2024-10-01 12:32:27.797636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:45.455 [2024-10-01 12:32:27.798060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.455 [2024-10-01 12:32:27.798207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.455 [2024-10-01 12:32:27.798214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.387 12:32:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:46.387 12:32:28 -- common/autotest_common.sh@852 -- # return 0 00:12:46.387 12:32:28 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=106288 00:12:46.387 12:32:28 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 106288 /var/tmp/spdk2.sock 00:12:46.387 12:32:28 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:46.387 12:32:28 -- common/autotest_common.sh@640 -- # local es=0 00:12:46.388 12:32:28 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 106288 /var/tmp/spdk2.sock 00:12:46.388 12:32:28 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:12:46.388 12:32:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:46.388 12:32:28 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:12:46.650 12:32:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:46.650 12:32:28 -- common/autotest_common.sh@643 -- # waitforlisten 106288 /var/tmp/spdk2.sock 00:12:46.650 12:32:28 -- common/autotest_common.sh@819 -- # '[' -z 106288 ']' 00:12:46.650 12:32:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:46.650 12:32:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:46.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:12:46.650 12:32:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:46.650 12:32:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:46.650 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:12:46.650 [2024-10-01 12:32:28.982036] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:46.650 [2024-10-01 12:32:28.982167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106288 ] 00:12:46.650 [2024-10-01 12:32:29.155786] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 106263 has claimed it. 00:12:46.650 [2024-10-01 12:32:29.155857] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:47.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (106288) - No such process 00:12:47.215 ERROR: process (pid: 106288) is no longer running 00:12:47.215 12:32:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:47.215 12:32:29 -- common/autotest_common.sh@852 -- # return 1 00:12:47.215 12:32:29 -- common/autotest_common.sh@643 -- # es=1 00:12:47.215 12:32:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:47.215 12:32:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:47.215 12:32:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:47.215 12:32:29 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:47.215 12:32:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:47.215 12:32:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:47.215 12:32:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:47.215 12:32:29 -- event/cpu_locks.sh@141 -- # killprocess 106263 00:12:47.215 12:32:29 -- common/autotest_common.sh@926 -- # '[' -z 106263 ']' 00:12:47.215 12:32:29 -- common/autotest_common.sh@930 -- # kill -0 106263 00:12:47.215 12:32:29 -- common/autotest_common.sh@931 -- # uname 00:12:47.215 12:32:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:47.215 12:32:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106263 00:12:47.215 12:32:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:47.215 12:32:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:47.215 killing process with pid 106263 00:12:47.215 12:32:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106263' 00:12:47.215 12:32:29 -- common/autotest_common.sh@945 -- # kill 106263 00:12:47.215 12:32:29 -- common/autotest_common.sh@950 -- # wait 106263 00:12:49.744 00:12:49.744 real 0m4.681s 00:12:49.744 user 0m12.442s 00:12:49.744 sys 0m0.615s 00:12:49.744 12:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.744 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.744 ************************************ 00:12:49.744 END TEST locking_overlapped_coremask 00:12:49.744 ************************************ 00:12:49.744 12:32:32 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:49.744 12:32:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:49.744 12:32:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:49.745 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.745 ************************************ 00:12:49.745 START TEST locking_overlapped_coremask_via_rpc 00:12:49.745 ************************************ 00:12:49.745 12:32:32 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:12:49.745 12:32:32 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=106364 00:12:49.745 12:32:32 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:49.745 12:32:32 -- event/cpu_locks.sh@149 -- # waitforlisten 106364 /var/tmp/spdk.sock 00:12:49.745 12:32:32 -- common/autotest_common.sh@819 -- # '[' -z 106364 ']' 00:12:49.745 12:32:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.745 12:32:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:49.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.745 12:32:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.745 12:32:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:49.745 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.745 [2024-10-01 12:32:32.182572] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:49.745 [2024-10-01 12:32:32.182706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106364 ] 00:12:50.004 [2024-10-01 12:32:32.355724] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:50.004 [2024-10-01 12:32:32.355796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.262 [2024-10-01 12:32:32.560286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:50.262 [2024-10-01 12:32:32.560680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.262 [2024-10-01 12:32:32.560903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.262 [2024-10-01 12:32:32.560907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.193 12:32:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:51.193 12:32:33 -- common/autotest_common.sh@852 -- # return 0 00:12:51.193 12:32:33 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=106396 00:12:51.193 12:32:33 -- event/cpu_locks.sh@153 -- # waitforlisten 106396 /var/tmp/spdk2.sock 00:12:51.193 12:32:33 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:51.193 12:32:33 -- common/autotest_common.sh@819 -- # '[' -z 106396 ']' 00:12:51.193 12:32:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:51.193 12:32:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:51.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:51.193 12:32:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
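The claim failure above is straight bitmask arithmetic: the first target ran with -m 0x7 (binary 00111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so the masks intersect exactly at bit 2, which is why core 2 is the one reported as already claimed. A one-line check with the masks from the log:

    # AND the two cpumasks; any non-zero bit marks a contested core.
    printf 'overlap: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 = core 2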
00:12:51.193 12:32:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:51.193 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.451 [2024-10-01 12:32:33.752817] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:51.451 [2024-10-01 12:32:33.752997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106396 ] 00:12:51.451 [2024-10-01 12:32:33.928552] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:51.451 [2024-10-01 12:32:33.928616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.017 [2024-10-01 12:32:34.360981] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.017 [2024-10-01 12:32:34.361550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.017 [2024-10-01 12:32:34.361377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.017 [2024-10-01 12:32:34.361561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:54.547 12:32:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:54.547 12:32:36 -- common/autotest_common.sh@852 -- # return 0 00:12:54.547 12:32:36 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:54.547 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.547 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.547 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.547 12:32:36 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:54.547 12:32:36 -- common/autotest_common.sh@640 -- # local es=0 00:12:54.547 12:32:36 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:54.547 12:32:36 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:54.547 12:32:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:54.547 12:32:36 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:54.547 12:32:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:54.547 12:32:36 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:54.547 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.547 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.547 [2024-10-01 12:32:36.692036] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 106364 has claimed it. 
00:12:54.547 request: 00:12:54.547 { 00:12:54.547 "method": "framework_enable_cpumask_locks", 00:12:54.547 "req_id": 1 00:12:54.547 } 00:12:54.547 Got JSON-RPC error response 00:12:54.547 response: 00:12:54.547 { 00:12:54.547 "code": -32603, 00:12:54.547 "message": "Failed to claim CPU core: 2" 00:12:54.547 } 00:12:54.547 12:32:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:54.547 12:32:36 -- common/autotest_common.sh@643 -- # es=1 00:12:54.547 12:32:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:54.547 12:32:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:54.547 12:32:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:54.547 12:32:36 -- event/cpu_locks.sh@158 -- # waitforlisten 106364 /var/tmp/spdk.sock 00:12:54.547 12:32:36 -- common/autotest_common.sh@819 -- # '[' -z 106364 ']' 00:12:54.547 12:32:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.547 12:32:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:54.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.547 12:32:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.547 12:32:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:54.547 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.547 12:32:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:54.547 12:32:36 -- common/autotest_common.sh@852 -- # return 0 00:12:54.547 12:32:36 -- event/cpu_locks.sh@159 -- # waitforlisten 106396 /var/tmp/spdk2.sock 00:12:54.547 12:32:36 -- common/autotest_common.sh@819 -- # '[' -z 106396 ']' 00:12:54.547 12:32:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:54.547 12:32:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:54.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:54.547 12:32:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
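The JSON-RPC exchange above is the via_rpc twist on the same conflict: both targets start with --disable-cpumask-locks, then framework_enable_cpumask_locks is invoked over each socket, succeeding on the first and failing on the second with the -32603 response because core 2 is already claimed. In terms of the rpc_cmd helper visible in the xtrace (whether it wraps scripts/rpc.py is an assumption about the harness):

    # First target, default /var/tmp/spdk.sock: locks are claimed successfully.
    rpc_cmd framework_enable_cpumask_locks
    # Second target: expected to fail with "Failed to claim CPU core: 2".
    rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks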
00:12:54.547 12:32:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:54.547 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.547 12:32:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:54.547 12:32:37 -- common/autotest_common.sh@852 -- # return 0 00:12:54.547 12:32:37 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:54.547 12:32:37 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:54.548 12:32:37 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:54.806 12:32:37 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:54.806 00:12:54.806 real 0m4.968s 00:12:54.806 user 0m1.636s 00:12:54.806 sys 0m0.279s 00:12:54.806 12:32:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.806 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.806 ************************************ 00:12:54.806 END TEST locking_overlapped_coremask_via_rpc 00:12:54.806 ************************************ 00:12:54.806 12:32:37 -- event/cpu_locks.sh@174 -- # cleanup 00:12:54.806 12:32:37 -- event/cpu_locks.sh@15 -- # [[ -z 106364 ]] 00:12:54.806 12:32:37 -- event/cpu_locks.sh@15 -- # killprocess 106364 00:12:54.806 12:32:37 -- common/autotest_common.sh@926 -- # '[' -z 106364 ']' 00:12:54.806 12:32:37 -- common/autotest_common.sh@930 -- # kill -0 106364 00:12:54.806 12:32:37 -- common/autotest_common.sh@931 -- # uname 00:12:54.806 12:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:54.806 12:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106364 00:12:54.806 12:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:54.806 12:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:54.806 killing process with pid 106364 00:12:54.806 12:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106364' 00:12:54.806 12:32:37 -- common/autotest_common.sh@945 -- # kill 106364 00:12:54.806 12:32:37 -- common/autotest_common.sh@950 -- # wait 106364 00:12:57.341 12:32:39 -- event/cpu_locks.sh@16 -- # [[ -z 106396 ]] 00:12:57.341 12:32:39 -- event/cpu_locks.sh@16 -- # killprocess 106396 00:12:57.341 12:32:39 -- common/autotest_common.sh@926 -- # '[' -z 106396 ']' 00:12:57.341 12:32:39 -- common/autotest_common.sh@930 -- # kill -0 106396 00:12:57.341 12:32:39 -- common/autotest_common.sh@931 -- # uname 00:12:57.341 12:32:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:57.341 12:32:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106396 00:12:57.341 12:32:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:57.341 12:32:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:57.341 killing process with pid 106396 00:12:57.342 12:32:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106396' 00:12:57.342 12:32:39 -- common/autotest_common.sh@945 -- # kill 106396 00:12:57.342 12:32:39 -- common/autotest_common.sh@950 -- # wait 106396 00:12:59.877 12:32:41 -- event/cpu_locks.sh@18 -- # rm -f 00:12:59.877 12:32:41 -- event/cpu_locks.sh@1 -- # cleanup 00:12:59.877 12:32:41 -- event/cpu_locks.sh@15 -- # [[ -z 106364 ]] 00:12:59.877 12:32:41 -- event/cpu_locks.sh@15 -- # killprocess 106364 00:12:59.877 
12:32:41 -- common/autotest_common.sh@926 -- # '[' -z 106364 ']' 00:12:59.877 12:32:41 -- common/autotest_common.sh@930 -- # kill -0 106364 00:12:59.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (106364) - No such process 00:12:59.877 Process with pid 106364 is not found 00:12:59.877 12:32:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 106364 is not found' 00:12:59.877 12:32:41 -- event/cpu_locks.sh@16 -- # [[ -z 106396 ]] 00:12:59.877 12:32:41 -- event/cpu_locks.sh@16 -- # killprocess 106396 00:12:59.877 12:32:41 -- common/autotest_common.sh@926 -- # '[' -z 106396 ']' 00:12:59.877 12:32:41 -- common/autotest_common.sh@930 -- # kill -0 106396 00:12:59.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (106396) - No such process 00:12:59.877 Process with pid 106396 is not found 00:12:59.877 12:32:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 106396 is not found' 00:12:59.877 12:32:41 -- event/cpu_locks.sh@18 -- # rm -f 00:12:59.877 ************************************ 00:12:59.877 END TEST cpu_locks 00:12:59.877 ************************************ 00:12:59.877 00:12:59.877 real 0m51.536s 00:12:59.877 user 1m29.427s 00:12:59.877 sys 0m6.460s 00:12:59.877 12:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.877 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.877 00:12:59.877 real 1m21.995s 00:12:59.877 user 2m25.537s 00:12:59.877 sys 0m10.524s 00:12:59.877 12:32:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.877 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.877 ************************************ 00:12:59.877 END TEST event 00:12:59.877 ************************************ 00:12:59.877 12:32:42 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:59.877 12:32:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:59.877 12:32:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.877 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.877 ************************************ 00:12:59.878 START TEST thread 00:12:59.878 ************************************ 00:12:59.878 12:32:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:59.878 * Looking for test storage... 00:12:59.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:59.878 12:32:42 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:59.878 12:32:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:12:59.878 12:32:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.878 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.878 ************************************ 00:12:59.878 START TEST thread_poller_perf 00:12:59.878 ************************************ 00:12:59.878 12:32:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:59.878 [2024-10-01 12:32:42.245289] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:12:59.878 [2024-10-01 12:32:42.245424] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106601 ] 00:13:00.137 [2024-10-01 12:32:42.409696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.137 [2024-10-01 12:32:42.615060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.137 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:01.515 ====================================== 00:13:01.515 busy:2505089824 (cyc) 00:13:01.515 total_run_count: 390000 00:13:01.515 tsc_hz: 2490000000 (cyc) 00:13:01.515 ====================================== 00:13:01.515 poller_cost: 6423 (cyc), 2579 (nsec) 00:13:01.515 00:13:01.515 real 0m1.824s 00:13:01.515 user 0m1.599s 00:13:01.515 sys 0m0.124s 00:13:01.515 12:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.515 12:32:44 -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 ************************************ 00:13:01.515 END TEST thread_poller_perf 00:13:01.515 ************************************ 00:13:01.775 12:32:44 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:01.775 12:32:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:13:01.775 12:32:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.775 12:32:44 -- common/autotest_common.sh@10 -- # set +x 00:13:01.775 ************************************ 00:13:01.775 START TEST thread_poller_perf 00:13:01.775 ************************************ 00:13:01.775 12:32:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:01.775 [2024-10-01 12:32:44.141863] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:01.775 [2024-10-01 12:32:44.141990] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106648 ] 00:13:02.035 [2024-10-01 12:32:44.308296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.035 [2024-10-01 12:32:44.507119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.035 Running 1000 pollers for 1 seconds with 0 microseconds period. 
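The poller_perf summary block is worth decoding once: poller_cost is busy cycles divided by total_run_count, with the nanosecond figure derived from the TSC rate. For the 1-microsecond-period run above, 2505089824 / 390000 ≈ 6423 cycles, and 6423 cycles at 2.49 GHz ≈ 2579 ns, matching the printed line. The same arithmetic in shell, with the numbers copied from the log:

    busy=2505089824; runs=390000; tsc_hz=2490000000
    cost_cyc=$((busy / runs))                      # 6423
    cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))  # 2579
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The 0-microsecond-period run whose results follow works out the same way: 2495027012 / 5203000 ≈ 479 cycles, about 192 ns per poll.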
00:13:03.412 ====================================== 00:13:03.412 busy:2495027012 (cyc) 00:13:03.412 total_run_count: 5203000 00:13:03.412 tsc_hz: 2490000000 (cyc) 00:13:03.412 ====================================== 00:13:03.412 poller_cost: 479 (cyc), 192 (nsec) 00:13:03.412 ************************************ 00:13:03.412 END TEST thread_poller_perf 00:13:03.412 ************************************ 00:13:03.412 00:13:03.412 real 0m1.812s 00:13:03.412 user 0m1.580s 00:13:03.412 sys 0m0.132s 00:13:03.412 12:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.412 12:32:45 -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 12:32:45 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:13:03.672 12:32:45 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:13:03.672 12:32:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:03.672 12:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.672 12:32:45 -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 ************************************ 00:13:03.672 START TEST thread_spdk_lock 00:13:03.672 ************************************ 00:13:03.672 12:32:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:13:03.672 [2024-10-01 12:32:46.032731] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:03.672 [2024-10-01 12:32:46.032984] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106696 ] 00:13:03.931 [2024-10-01 12:32:46.204693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:03.931 [2024-10-01 12:32:46.399153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.931 [2024-10-01 12:32:46.399168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.498 [2024-10-01 12:32:46.909962] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:04.498 [2024-10-01 12:32:46.910191] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:13:04.499 [2024-10-01 12:32:46.910263] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55f8c1263ac0 00:13:04.499 [2024-10-01 12:32:46.919440] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:04.499 [2024-10-01 12:32:46.919621] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:04.499 [2024-10-01 12:32:46.919725] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:13:05.068 Starting test contend 00:13:05.068 Worker Delay Wait us Hold us Total us 00:13:05.068 0 3 136625 191148 327773 00:13:05.068 1 5 74756 291497 366254 00:13:05.068 PASS test contend 00:13:05.068 Starting test hold_by_poller 
00:13:05.068 PASS test hold_by_poller 00:13:05.068 Starting test hold_by_message 00:13:05.068 PASS test hold_by_message 00:13:05.068 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:13:05.068 100014 assertions passed 00:13:05.068 0 assertions failed 00:13:05.068 ************************************ 00:13:05.068 END TEST thread_spdk_lock 00:13:05.068 ************************************ 00:13:05.068 00:13:05.068 real 0m1.338s 00:13:05.068 user 0m1.630s 00:13:05.068 sys 0m0.128s 00:13:05.068 12:32:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.068 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.068 ************************************ 00:13:05.068 END TEST thread 00:13:05.068 ************************************ 00:13:05.068 00:13:05.068 real 0m5.295s 00:13:05.068 user 0m4.972s 00:13:05.068 sys 0m0.555s 00:13:05.068 12:32:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.068 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.068 12:32:47 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:05.068 12:32:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:05.068 12:32:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.068 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.068 ************************************ 00:13:05.068 START TEST accel 00:13:05.068 ************************************ 00:13:05.068 12:32:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:05.068 * Looking for test storage... 00:13:05.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:05.068 12:32:47 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:13:05.068 12:32:47 -- accel/accel.sh@74 -- # get_expected_opcs 00:13:05.068 12:32:47 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:05.068 12:32:47 -- accel/accel.sh@59 -- # spdk_tgt_pid=106789 00:13:05.068 12:32:47 -- accel/accel.sh@60 -- # waitforlisten 106789 00:13:05.068 12:32:47 -- common/autotest_common.sh@819 -- # '[' -z 106789 ']' 00:13:05.068 12:32:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.068 12:32:47 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:05.068 12:32:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:05.068 12:32:47 -- accel/accel.sh@58 -- # build_accel_config 00:13:05.068 12:32:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.068 12:32:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:05.068 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.068 12:32:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:05.068 12:32:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.068 12:32:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.068 12:32:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:05.068 12:32:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:05.068 12:32:47 -- accel/accel.sh@41 -- # local IFS=, 00:13:05.068 12:32:47 -- accel/accel.sh@42 -- # jq -r . 00:13:05.327 [2024-10-01 12:32:47.626533] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:05.327 [2024-10-01 12:32:47.626815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106789 ] 00:13:05.327 [2024-10-01 12:32:47.793988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.586 [2024-10-01 12:32:47.982508] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.587 [2024-10-01 12:32:47.982882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.999 12:32:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:06.999 12:32:49 -- common/autotest_common.sh@852 -- # return 0 00:13:06.999 12:32:49 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:06.999 12:32:49 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:13:06.999 12:32:49 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:13:06.999 12:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.999 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:13:06.999 12:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # IFS== 00:13:06.999 12:32:49 -- accel/accel.sh@64 -- # read -r opc module 00:13:06.999 12:32:49 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:13:06.999 12:32:49 -- accel/accel.sh@67 -- # killprocess 106789 00:13:06.999 12:32:49 -- common/autotest_common.sh@926 -- # '[' -z 106789 ']' 00:13:06.999 12:32:49 -- common/autotest_common.sh@930 -- # kill -0 106789 00:13:06.999 12:32:49 -- common/autotest_common.sh@931 -- # uname 00:13:06.999 12:32:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:06.999 12:32:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106789 00:13:06.999 12:32:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:06.999 killing process with pid 106789 00:13:06.999 12:32:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:06.999 12:32:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106789' 00:13:06.999 12:32:49 -- common/autotest_common.sh@945 -- # kill 106789 00:13:06.999 12:32:49 -- common/autotest_common.sh@950 -- # wait 106789 00:13:09.535 12:32:51 -- accel/accel.sh@68 -- # trap - ERR 00:13:09.535 12:32:51 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:13:09.535 12:32:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.535 12:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.535 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.535 12:32:51 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:13:09.535 12:32:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:09.535 12:32:51 -- accel/accel.sh@12 -- # build_accel_config 00:13:09.535 12:32:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:09.535 12:32:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.535 12:32:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.535 12:32:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:09.535 12:32:51 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:13:09.535 12:32:51 -- accel/accel.sh@41 -- # local IFS=, 00:13:09.535 12:32:51 -- accel/accel.sh@42 -- # jq -r . 00:13:09.535 12:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.535 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.535 12:32:51 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:09.535 12:32:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:09.535 12:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.535 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.535 ************************************ 00:13:09.535 START TEST accel_missing_filename 00:13:09.535 ************************************ 00:13:09.535 12:32:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:13:09.535 12:32:51 -- common/autotest_common.sh@640 -- # local es=0 00:13:09.535 12:32:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:09.535 12:32:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:13:09.535 12:32:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:09.535 12:32:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:13:09.535 12:32:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:09.535 12:32:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:13:09.535 12:32:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:09.535 12:32:51 -- accel/accel.sh@12 -- # build_accel_config 00:13:09.535 12:32:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:09.535 12:32:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.535 12:32:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.535 12:32:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:09.535 12:32:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:09.535 12:32:51 -- accel/accel.sh@41 -- # local IFS=, 00:13:09.535 12:32:51 -- accel/accel.sh@42 -- # jq -r . 00:13:09.535 [2024-10-01 12:32:51.711377] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:09.535 [2024-10-01 12:32:51.711623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106881 ] 00:13:09.535 [2024-10-01 12:32:51.876425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.793 [2024-10-01 12:32:52.072758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.793 [2024-10-01 12:32:52.307800] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:10.730 [2024-10-01 12:32:52.938857] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:13:10.989 A filename is required. 
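Both of these negative accel tests hinge on the -l option: per the accel_perf usage text further below, compress/decompress workloads take their uncompressed input file via -l, so the bare invocation above dies with "A filename is required.", while the next test supplies the file plus -y only to be rejected because compression has no verify mode. The shape of the two invocations, trimmed from the xtrace (the bib file path is the one the log shows):

    # No -l: accel_perf refuses the compress workload outright.
    accel_perf -t 1 -w compress
    # With -l but also -y: "Compression does not support the verify option".
    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y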
00:13:10.989 ************************************ 00:13:10.989 END TEST accel_missing_filename 00:13:10.989 ************************************ 00:13:10.989 12:32:53 -- common/autotest_common.sh@643 -- # es=234 00:13:10.989 12:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:10.989 12:32:53 -- common/autotest_common.sh@652 -- # es=106 00:13:10.989 12:32:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:10.989 12:32:53 -- common/autotest_common.sh@660 -- # es=1 00:13:10.989 12:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:10.989 00:13:10.989 real 0m1.744s 00:13:10.989 user 0m1.505s 00:13:10.989 sys 0m0.184s 00:13:10.989 12:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.989 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:10.989 12:32:53 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.989 12:32:53 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:13:10.989 12:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:10.989 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:10.989 ************************************ 00:13:10.989 START TEST accel_compress_verify 00:13:10.989 ************************************ 00:13:10.989 12:32:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.989 12:32:53 -- common/autotest_common.sh@640 -- # local es=0 00:13:10.989 12:32:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.989 12:32:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:13:10.989 12:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:10.989 12:32:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:13:10.989 12:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:10.989 12:32:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.989 12:32:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.989 12:32:53 -- accel/accel.sh@12 -- # build_accel_config 00:13:10.989 12:32:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:10.989 12:32:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:10.989 12:32:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:10.989 12:32:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:10.989 12:32:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:10.989 12:32:53 -- accel/accel.sh@41 -- # local IFS=, 00:13:10.989 12:32:53 -- accel/accel.sh@42 -- # jq -r . 00:13:11.249 [2024-10-01 12:32:53.526379] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:11.249 [2024-10-01 12:32:53.526611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106925 ] 00:13:11.249 [2024-10-01 12:32:53.692631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.508 [2024-10-01 12:32:53.889889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.766 [2024-10-01 12:32:54.124074] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:12.334 [2024-10-01 12:32:54.753432] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:13:12.902 00:13:12.902 Compression does not support the verify option, aborting. 00:13:12.902 ************************************ 00:13:12.902 END TEST accel_compress_verify 00:13:12.902 ************************************ 00:13:12.902 12:32:55 -- common/autotest_common.sh@643 -- # es=161 00:13:12.902 12:32:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:12.902 12:32:55 -- common/autotest_common.sh@652 -- # es=33 00:13:12.902 12:32:55 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:12.902 12:32:55 -- common/autotest_common.sh@660 -- # es=1 00:13:12.902 12:32:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:12.902 00:13:12.902 real 0m1.742s 00:13:12.902 user 0m1.493s 00:13:12.902 sys 0m0.194s 00:13:12.902 12:32:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.902 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:12.902 12:32:55 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:12.902 12:32:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:12.902 12:32:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:12.902 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:12.902 ************************************ 00:13:12.902 START TEST accel_wrong_workload 00:13:12.902 ************************************ 00:13:12.902 12:32:55 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:13:12.902 12:32:55 -- common/autotest_common.sh@640 -- # local es=0 00:13:12.902 12:32:55 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:12.902 12:32:55 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:13:12.902 12:32:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:12.902 12:32:55 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:13:12.902 12:32:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:12.902 12:32:55 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:13:12.902 12:32:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:12.902 12:32:55 -- accel/accel.sh@12 -- # build_accel_config 00:13:12.902 12:32:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:12.902 12:32:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.902 12:32:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.902 12:32:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:12.902 12:32:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:12.902 12:32:55 -- accel/accel.sh@41 -- # local IFS=, 00:13:12.902 12:32:55 -- accel/accel.sh@42 -- # jq -r . 
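The next negative test hands accel_perf a workload name it does not implement. The -w value is checked in spdk_app_parse_args, so the run below never reaches the I/O path: it prints the error plus the full option summary and exits with status 1, which the NOT wrapper again expects. Roughly:

  accel_perf -t 1 -w copy -y     # recognized workload: runs
  accel_perf -t 1 -w foobar      # unrecognized: usage text + exit 1 (shown below)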
00:13:12.902 Unsupported workload type: foobar 00:13:12.902 [2024-10-01 12:32:55.347134] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:12.902 accel_perf options: 00:13:12.902 [-h help message] 00:13:12.902 [-q queue depth per core] 00:13:12.902 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:12.902 [-T number of threads per core 00:13:12.902 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:12.902 [-t time in seconds] 00:13:12.902 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:12.902 [ dif_verify, , dif_generate, dif_generate_copy 00:13:12.902 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:12.902 [-l for compress/decompress workloads, name of uncompressed input file 00:13:12.902 [-S for crc32c workload, use this seed value (default 0) 00:13:12.902 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:12.902 [-f for fill workload, use this BYTE value (default 255) 00:13:12.902 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:12.902 [-y verify result if this switch is on] 00:13:12.902 [-a tasks to allocate per core (default: same value as -q)] 00:13:12.902 Can be used to spread operations across a wider range of memory. 00:13:12.902 12:32:55 -- common/autotest_common.sh@643 -- # es=1 00:13:12.902 12:32:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:12.902 12:32:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:12.902 12:32:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:12.902 00:13:12.902 real 0m0.096s 00:13:12.902 user 0m0.096s 00:13:12.902 sys 0m0.059s 00:13:12.902 12:32:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.902 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:12.902 ************************************ 00:13:12.902 END TEST accel_wrong_workload 00:13:12.902 ************************************ 00:13:13.161 12:32:55 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:13.161 12:32:55 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:13:13.161 12:32:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.161 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:13.161 ************************************ 00:13:13.161 START TEST accel_negative_buffers 00:13:13.161 ************************************ 00:13:13.161 12:32:55 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:13.161 12:32:55 -- common/autotest_common.sh@640 -- # local es=0 00:13:13.161 12:32:55 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:13.161 12:32:55 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:13:13.161 12:32:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:13.161 12:32:55 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:13:13.161 12:32:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:13.161 12:32:55 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:13:13.161 12:32:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:13.161 12:32:55 -- accel/accel.sh@12 -- # 
build_accel_config 00:13:13.161 12:32:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:13.161 12:32:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.161 12:32:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.161 12:32:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:13.161 12:32:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:13.161 12:32:55 -- accel/accel.sh@41 -- # local IFS=, 00:13:13.161 12:32:55 -- accel/accel.sh@42 -- # jq -r . 00:13:13.161 -x option must be non-negative. 00:13:13.161 [2024-10-01 12:32:55.511095] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:13.161 accel_perf options: 00:13:13.161 [-h help message] 00:13:13.161 [-q queue depth per core] 00:13:13.161 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:13.161 [-T number of threads per core 00:13:13.161 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:13.161 [-t time in seconds] 00:13:13.161 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:13.161 [ dif_verify, , dif_generate, dif_generate_copy 00:13:13.161 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:13.161 [-l for compress/decompress workloads, name of uncompressed input file 00:13:13.161 [-S for crc32c workload, use this seed value (default 0) 00:13:13.161 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:13.161 [-f for fill workload, use this BYTE value (default 255) 00:13:13.161 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:13.161 [-y verify result if this switch is on] 00:13:13.161 [-a tasks to allocate per core (default: same value as -q)] 00:13:13.161 Can be used to spread operations across a wider range of memory. 
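Buffer-count validation works the same way: -x -1 trips the non-negative check before anything is allocated, and the usage text notes that a real xor run needs at least two source buffers. A sketch of the smallest valid xor invocation (buffer count chosen for illustration):

  accel_perf -t 1 -w xor -y -x 2   # two source buffers is the documented minimum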
00:13:13.161 12:32:55 -- common/autotest_common.sh@643 -- # es=1 00:13:13.161 12:32:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:13.161 12:32:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:13.161 12:32:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:13.161 00:13:13.161 real 0m0.087s 00:13:13.161 user 0m0.079s 00:13:13.161 sys 0m0.052s 00:13:13.161 12:32:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.161 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:13.161 ************************************ 00:13:13.161 END TEST accel_negative_buffers 00:13:13.161 ************************************ 00:13:13.161 12:32:55 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:13.161 12:32:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:13:13.161 12:32:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.161 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:13.161 ************************************ 00:13:13.161 START TEST accel_crc32c 00:13:13.161 ************************************ 00:13:13.161 12:32:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:13.161 12:32:55 -- accel/accel.sh@16 -- # local accel_opc 00:13:13.161 12:32:55 -- accel/accel.sh@17 -- # local accel_module 00:13:13.161 12:32:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:13.161 12:32:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:13.161 12:32:55 -- accel/accel.sh@12 -- # build_accel_config 00:13:13.161 12:32:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:13.161 12:32:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.161 12:32:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.161 12:32:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:13.161 12:32:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:13.161 12:32:55 -- accel/accel.sh@41 -- # local IFS=, 00:13:13.161 12:32:55 -- accel/accel.sh@42 -- # jq -r . 00:13:13.161 [2024-10-01 12:32:55.667707] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:13.161 [2024-10-01 12:32:55.668102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107018 ] 00:13:13.420 [2024-10-01 12:32:55.832555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.679 [2024-10-01 12:32:56.029843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.214 12:32:58 -- accel/accel.sh@18 -- # out=' 00:13:16.214 SPDK Configuration: 00:13:16.214 Core mask: 0x1 00:13:16.214 00:13:16.214 Accel Perf Configuration: 00:13:16.214 Workload Type: crc32c 00:13:16.214 CRC-32C seed: 32 00:13:16.214 Transfer size: 4096 bytes 00:13:16.214 Vector count 1 00:13:16.214 Module: software 00:13:16.214 Queue depth: 32 00:13:16.214 Allocate depth: 32 00:13:16.214 # threads/core: 1 00:13:16.214 Run time: 1 seconds 00:13:16.214 Verify: Yes 00:13:16.214 00:13:16.214 Running for 1 seconds... 
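In the result table that follows, the Bandwidth column is simply the transfer rate multiplied by the 4096-byte transfer size. As a worked check against the numbers below:

  538560 transfers/s * 4096 B / (1024 * 1024) ≈ 2103 MiB/s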
00:13:16.214 00:13:16.214 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:16.214 ------------------------------------------------------------------------------------ 00:13:16.214 0,0 538560/s 2103 MiB/s 0 0 00:13:16.214 ==================================================================================== 00:13:16.214 Total 538560/s 2103 MiB/s 0 0' 00:13:16.214 12:32:58 -- accel/accel.sh@20 -- # IFS=: 00:13:16.214 12:32:58 -- accel/accel.sh@20 -- # read -r var val 00:13:16.214 12:32:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:16.214 12:32:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:16.214 12:32:58 -- accel/accel.sh@12 -- # build_accel_config 00:13:16.214 12:32:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:16.214 12:32:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:16.214 12:32:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:16.214 12:32:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:16.214 12:32:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:16.214 12:32:58 -- accel/accel.sh@41 -- # local IFS=, 00:13:16.214 12:32:58 -- accel/accel.sh@42 -- # jq -r . 00:13:16.214 [2024-10-01 12:32:58.417229] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:16.214 [2024-10-01 12:32:58.417489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107060 ] 00:13:16.214 [2024-10-01 12:32:58.581827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.473 [2024-10-01 12:32:58.803809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val= 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val= 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val=0x1 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val= 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val= 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val=crc32c 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val=32 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val= 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val=software 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@23 -- # accel_module=software 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val=32 00:13:16.731 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.731 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.731 12:32:59 -- accel/accel.sh@21 -- # val=32 00:13:16.732 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.732 12:32:59 -- accel/accel.sh@21 -- # val=1 00:13:16.732 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.732 12:32:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:16.732 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.732 12:32:59 -- accel/accel.sh@21 -- # val=Yes 00:13:16.732 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.732 12:32:59 -- accel/accel.sh@21 -- # val= 00:13:16.732 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:16.732 12:32:59 -- accel/accel.sh@21 -- # val= 00:13:16.732 12:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # IFS=: 00:13:16.732 12:32:59 -- accel/accel.sh@20 -- # read -r var val 00:13:18.639 12:33:01 -- accel/accel.sh@21 -- # val= 00:13:18.639 12:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # IFS=: 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # read -r var val 00:13:18.639 12:33:01 -- accel/accel.sh@21 -- # val= 00:13:18.639 12:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # IFS=: 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # read -r var val 00:13:18.639 12:33:01 -- accel/accel.sh@21 -- # val= 00:13:18.639 12:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # IFS=: 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # read -r var val 00:13:18.639 12:33:01 -- accel/accel.sh@21 -- # val= 00:13:18.639 12:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # IFS=: 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # read -r var val 00:13:18.639 12:33:01 -- accel/accel.sh@21 -- # val= 00:13:18.639 12:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # IFS=: 00:13:18.639 12:33:01 
-- accel/accel.sh@20 -- # read -r var val 00:13:18.639 12:33:01 -- accel/accel.sh@21 -- # val= 00:13:18.639 12:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # IFS=: 00:13:18.639 12:33:01 -- accel/accel.sh@20 -- # read -r var val 00:13:18.898 ************************************ 00:13:18.898 END TEST accel_crc32c 00:13:18.898 ************************************ 00:13:18.898 12:33:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:18.898 12:33:01 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:13:18.898 12:33:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:18.898 00:13:18.898 real 0m5.556s 00:13:18.898 user 0m4.987s 00:13:18.898 sys 0m0.383s 00:13:18.898 12:33:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.898 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:18.898 12:33:01 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:13:18.898 12:33:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:13:18.898 12:33:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:18.898 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:18.898 ************************************ 00:13:18.898 START TEST accel_crc32c_C2 00:13:18.898 ************************************ 00:13:18.898 12:33:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:13:18.898 12:33:01 -- accel/accel.sh@16 -- # local accel_opc 00:13:18.898 12:33:01 -- accel/accel.sh@17 -- # local accel_module 00:13:18.898 12:33:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:18.898 12:33:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:18.898 12:33:01 -- accel/accel.sh@12 -- # build_accel_config 00:13:18.898 12:33:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:18.898 12:33:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:18.898 12:33:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:18.898 12:33:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:18.898 12:33:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:18.898 12:33:01 -- accel/accel.sh@41 -- # local IFS=, 00:13:18.898 12:33:01 -- accel/accel.sh@42 -- # jq -r . 00:13:18.898 [2024-10-01 12:33:01.297378] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:18.898 [2024-10-01 12:33:01.297617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107117 ] 00:13:19.157 [2024-10-01 12:33:01.462066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.157 [2024-10-01 12:33:01.661208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.724 12:33:04 -- accel/accel.sh@18 -- # out=' 00:13:21.724 SPDK Configuration: 00:13:21.724 Core mask: 0x1 00:13:21.724 00:13:21.724 Accel Perf Configuration: 00:13:21.724 Workload Type: crc32c 00:13:21.724 CRC-32C seed: 0 00:13:21.724 Transfer size: 4096 bytes 00:13:21.724 Vector count 2 00:13:21.724 Module: software 00:13:21.724 Queue depth: 32 00:13:21.724 Allocate depth: 32 00:13:21.724 # threads/core: 1 00:13:21.724 Run time: 1 seconds 00:13:21.724 Verify: Yes 00:13:21.724 00:13:21.724 Running for 1 seconds... 
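This variant keeps the 4096-byte transfer size but passes -C 2, which the usage text describes as the io vector size to test, so each buffer is presented to the crc32c operation as two chained iovecs. The per-transfer byte count is unchanged, so the bandwidth arithmetic for the table below stays the same even though the extra iovec handling lowers the transfer rate relative to the single-vector run:

  426016 transfers/s * 4096 B / (1024 * 1024) ≈ 1664 MiB/s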
00:13:21.724 00:13:21.724 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:21.724 ------------------------------------------------------------------------------------ 00:13:21.724 0,0 426016/s 1664 MiB/s 0 0 00:13:21.724 ==================================================================================== 00:13:21.724 Total 426016/s 1664 MiB/s 0 0' 00:13:21.724 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:21.724 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:21.724 12:33:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:21.724 12:33:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:21.724 12:33:04 -- accel/accel.sh@12 -- # build_accel_config 00:13:21.724 12:33:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:21.724 12:33:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:21.724 12:33:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:21.724 12:33:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:21.724 12:33:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:21.724 12:33:04 -- accel/accel.sh@41 -- # local IFS=, 00:13:21.724 12:33:04 -- accel/accel.sh@42 -- # jq -r . 00:13:21.724 [2024-10-01 12:33:04.065003] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:21.724 [2024-10-01 12:33:04.065271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107166 ] 00:13:21.984 [2024-10-01 12:33:04.231575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.984 [2024-10-01 12:33:04.451707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val= 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val= 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=0x1 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val= 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val= 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=crc32c 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=0 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 --
accel/accel.sh@21 -- # val='4096 bytes' 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val= 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=software 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@23 -- # accel_module=software 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=32 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=32 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=1 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val=Yes 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val= 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:22.243 12:33:04 -- accel/accel.sh@21 -- # val= 00:13:22.243 12:33:04 -- accel/accel.sh@22 -- # case "$var" in 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # IFS=: 00:13:22.243 12:33:04 -- accel/accel.sh@20 -- # read -r var val 00:13:24.779 12:33:06 -- accel/accel.sh@21 -- # val= 00:13:24.779 12:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:13:24.779 12:33:06 -- accel/accel.sh@20 -- # IFS=: 00:13:24.779 12:33:06 -- accel/accel.sh@20 -- # read -r var val 00:13:24.779 12:33:06 -- accel/accel.sh@21 -- # val= 00:13:24.779 12:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:13:24.779 12:33:06 -- accel/accel.sh@20 -- # IFS=: 00:13:24.779 12:33:06 -- accel/accel.sh@20 -- # read -r var val 00:13:24.779 12:33:06 -- accel/accel.sh@21 -- # val= 00:13:24.779 12:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:13:24.779 12:33:06 -- accel/accel.sh@20 -- # IFS=: 00:13:24.779 12:33:06 -- accel/accel.sh@20 -- # read -r var val 00:13:24.779 12:33:06 -- accel/accel.sh@21 -- # val= 00:13:24.779 12:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:13:24.779 12:33:06 -- accel/accel.sh@20 -- # IFS=: 00:13:24.780 12:33:06 -- accel/accel.sh@20 -- # read -r var val 00:13:24.780 12:33:06 -- accel/accel.sh@21 -- # val= 00:13:24.780 12:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:13:24.780 12:33:06 -- accel/accel.sh@20 -- # IFS=: 00:13:24.780 12:33:06 -- 
accel/accel.sh@20 -- # read -r var val 00:13:24.780 12:33:06 -- accel/accel.sh@21 -- # val= 00:13:24.780 12:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:13:24.780 12:33:06 -- accel/accel.sh@20 -- # IFS=: 00:13:24.780 12:33:06 -- accel/accel.sh@20 -- # read -r var val 00:13:24.780 ************************************ 00:13:24.780 END TEST accel_crc32c_C2 00:13:24.780 ************************************ 00:13:24.780 12:33:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:24.780 12:33:06 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:13:24.780 12:33:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:24.780 00:13:24.780 real 0m5.562s 00:13:24.780 user 0m5.013s 00:13:24.780 sys 0m0.371s 00:13:24.780 12:33:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.780 12:33:06 -- common/autotest_common.sh@10 -- # set +x 00:13:24.780 12:33:06 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:13:24.780 12:33:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:24.780 12:33:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:24.780 12:33:06 -- common/autotest_common.sh@10 -- # set +x 00:13:24.780 ************************************ 00:13:24.780 START TEST accel_copy 00:13:24.780 ************************************ 00:13:24.780 12:33:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:13:24.780 12:33:06 -- accel/accel.sh@16 -- # local accel_opc 00:13:24.780 12:33:06 -- accel/accel.sh@17 -- # local accel_module 00:13:24.780 12:33:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:13:24.780 12:33:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:24.780 12:33:06 -- accel/accel.sh@12 -- # build_accel_config 00:13:24.780 12:33:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:24.780 12:33:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:24.780 12:33:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:24.780 12:33:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:24.780 12:33:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:24.780 12:33:06 -- accel/accel.sh@41 -- # local IFS=, 00:13:24.780 12:33:06 -- accel/accel.sh@42 -- # jq -r . 00:13:24.780 [2024-10-01 12:33:06.930136] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:24.780 [2024-10-01 12:33:06.930417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107211 ] 00:13:24.780 [2024-10-01 12:33:07.095038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.780 [2024-10-01 12:33:07.299786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.311 12:33:09 -- accel/accel.sh@18 -- # out=' 00:13:27.311 SPDK Configuration: 00:13:27.311 Core mask: 0x1 00:13:27.311 00:13:27.311 Accel Perf Configuration: 00:13:27.311 Workload Type: copy 00:13:27.311 Transfer size: 4096 bytes 00:13:27.311 Vector count 1 00:13:27.311 Module: software 00:13:27.311 Queue depth: 32 00:13:27.311 Allocate depth: 32 00:13:27.311 # threads/core: 1 00:13:27.311 Run time: 1 seconds 00:13:27.311 Verify: Yes 00:13:27.311 00:13:27.311 Running for 1 seconds... 
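The copy workload is a plain memcpy-style operation through the accel framework; with -y each destination buffer is compared back against its source, so any corruption would surface in the Failed and Miscompares columns of the table below rather than as an abort. The bandwidth figure again follows directly from the rate:

  354112 transfers/s * 4096 B / (1024 * 1024) ≈ 1383 MiB/s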
00:13:27.311 00:13:27.311 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:27.311 ------------------------------------------------------------------------------------ 00:13:27.311 0,0 354112/s 1383 MiB/s 0 0 00:13:27.311 ==================================================================================== 00:13:27.311 Total 354112/s 1383 MiB/s 0 0' 00:13:27.311 12:33:09 -- accel/accel.sh@20 -- # IFS=: 00:13:27.311 12:33:09 -- accel/accel.sh@20 -- # read -r var val 00:13:27.311 12:33:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:13:27.311 12:33:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:27.311 12:33:09 -- accel/accel.sh@12 -- # build_accel_config 00:13:27.311 12:33:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:27.311 12:33:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:27.311 12:33:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:27.311 12:33:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:27.311 12:33:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:27.311 12:33:09 -- accel/accel.sh@41 -- # local IFS=, 00:13:27.311 12:33:09 -- accel/accel.sh@42 -- # jq -r . 00:13:27.311 [2024-10-01 12:33:09.660652] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:27.311 [2024-10-01 12:33:09.660899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107258 ] 00:13:27.311 [2024-10-01 12:33:09.826545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.570 [2024-10-01 12:33:10.042532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val= 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val= 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val=0x1 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val= 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val= 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val=copy 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@24 -- # accel_opc=copy 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- 
accel/accel.sh@21 -- # val= 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val=software 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@23 -- # accel_module=software 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val=32 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val=32 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val=1 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val=Yes 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val= 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:27.829 12:33:10 -- accel/accel.sh@21 -- # val= 00:13:27.829 12:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # IFS=: 00:13:27.829 12:33:10 -- accel/accel.sh@20 -- # read -r var val 00:13:30.466 12:33:12 -- accel/accel.sh@21 -- # val= 00:13:30.466 12:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # IFS=: 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # read -r var val 00:13:30.466 12:33:12 -- accel/accel.sh@21 -- # val= 00:13:30.466 12:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # IFS=: 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # read -r var val 00:13:30.466 12:33:12 -- accel/accel.sh@21 -- # val= 00:13:30.466 12:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # IFS=: 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # read -r var val 00:13:30.466 12:33:12 -- accel/accel.sh@21 -- # val= 00:13:30.466 12:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # IFS=: 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # read -r var val 00:13:30.466 12:33:12 -- accel/accel.sh@21 -- # val= 00:13:30.466 12:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # IFS=: 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # read -r var val 00:13:30.466 12:33:12 -- accel/accel.sh@21 -- # val= 00:13:30.466 12:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:13:30.466 12:33:12 -- accel/accel.sh@20 -- # IFS=: 00:13:30.466 12:33:12 -- 
accel/accel.sh@20 -- # read -r var val 00:13:30.466 12:33:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:30.466 12:33:12 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:13:30.466 12:33:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:30.466 00:13:30.466 real 0m5.524s 00:13:30.466 user 0m4.996s 00:13:30.466 sys 0m0.337s 00:13:30.466 12:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.466 ************************************ 00:13:30.466 END TEST accel_copy 00:13:30.466 ************************************ 00:13:30.466 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:13:30.466 12:33:12 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:30.466 12:33:12 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:30.466 12:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.466 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:13:30.466 ************************************ 00:13:30.466 START TEST accel_fill 00:13:30.466 ************************************ 00:13:30.466 12:33:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:30.466 12:33:12 -- accel/accel.sh@16 -- # local accel_opc 00:13:30.466 12:33:12 -- accel/accel.sh@17 -- # local accel_module 00:13:30.466 12:33:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:30.466 12:33:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:30.466 12:33:12 -- accel/accel.sh@12 -- # build_accel_config 00:13:30.466 12:33:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:30.466 12:33:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:30.466 12:33:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:30.466 12:33:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:30.466 12:33:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:30.466 12:33:12 -- accel/accel.sh@41 -- # local IFS=, 00:13:30.466 12:33:12 -- accel/accel.sh@42 -- # jq -r . 00:13:30.466 [2024-10-01 12:33:12.516557] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:30.466 [2024-10-01 12:33:12.516855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107310 ] 00:13:30.466 [2024-10-01 12:33:12.695376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.466 [2024-10-01 12:33:12.906966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.010 12:33:15 -- accel/accel.sh@18 -- # out=' 00:13:33.010 SPDK Configuration: 00:13:33.010 Core mask: 0x1 00:13:33.010 00:13:33.010 Accel Perf Configuration: 00:13:33.010 Workload Type: fill 00:13:33.010 Fill pattern: 0x80 00:13:33.010 Transfer size: 4096 bytes 00:13:33.010 Vector count 1 00:13:33.010 Module: software 00:13:33.010 Queue depth: 64 00:13:33.010 Allocate depth: 64 00:13:33.010 # threads/core: 1 00:13:33.010 Run time: 1 seconds 00:13:33.010 Verify: Yes 00:13:33.010 00:13:33.010 Running for 1 seconds... 
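Two details of this fill run are worth decoding: -f 128 is the fill byte in decimal, which the configuration dump below reports as pattern 0x80, and -q 64 -a 64 double the queue and allocate depths from the 32 used by the other tests in this section. An equivalent spelling of the same invocation:

  accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # 128 == 0x80; queue/allocate depth 64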
00:13:33.010 00:13:33.010 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:33.010 ------------------------------------------------------------------------------------ 00:13:33.010 0,0 574080/s 2242 MiB/s 0 0 00:13:33.010 ==================================================================================== 00:13:33.010 Total 574080/s 2242 MiB/s 0 0' 00:13:33.010 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.010 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.010 12:33:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:33.010 12:33:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:33.010 12:33:15 -- accel/accel.sh@12 -- # build_accel_config 00:13:33.010 12:33:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:33.010 12:33:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:33.010 12:33:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:33.010 12:33:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:33.010 12:33:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:33.010 12:33:15 -- accel/accel.sh@41 -- # local IFS=, 00:13:33.010 12:33:15 -- accel/accel.sh@42 -- # jq -r . 00:13:33.010 [2024-10-01 12:33:15.311483] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:33.010 [2024-10-01 12:33:15.311824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107353 ] 00:13:33.010 [2024-10-01 12:33:15.477672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.270 [2024-10-01 12:33:15.704336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val= 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val= 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=0x1 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val= 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val= 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=fill 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@24 -- # accel_opc=fill 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=0x80 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 
00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val= 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=software 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@23 -- # accel_module=software 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=64 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=64 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=1 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val=Yes 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val= 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:33.529 12:33:15 -- accel/accel.sh@21 -- # val= 00:13:33.529 12:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # IFS=: 00:13:33.529 12:33:15 -- accel/accel.sh@20 -- # read -r var val 00:13:36.066 12:33:18 -- accel/accel.sh@21 -- # val= 00:13:36.066 12:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # IFS=: 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # read -r var val 00:13:36.066 12:33:18 -- accel/accel.sh@21 -- # val= 00:13:36.066 12:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # IFS=: 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # read -r var val 00:13:36.066 12:33:18 -- accel/accel.sh@21 -- # val= 00:13:36.066 12:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # IFS=: 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # read -r var val 00:13:36.066 12:33:18 -- accel/accel.sh@21 -- # val= 00:13:36.066 12:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # IFS=: 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # read -r var val 00:13:36.066 12:33:18 -- accel/accel.sh@21 -- # val= 00:13:36.066 12:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # IFS=: 
00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # read -r var val 00:13:36.066 12:33:18 -- accel/accel.sh@21 -- # val= 00:13:36.066 12:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # IFS=: 00:13:36.066 12:33:18 -- accel/accel.sh@20 -- # read -r var val 00:13:36.066 ************************************ 00:13:36.066 END TEST accel_fill 00:13:36.066 ************************************ 00:13:36.066 12:33:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:36.066 12:33:18 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:13:36.066 12:33:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:36.066 00:13:36.066 real 0m5.594s 00:13:36.066 user 0m5.081s 00:13:36.066 sys 0m0.340s 00:13:36.066 12:33:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.066 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.066 12:33:18 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:13:36.066 12:33:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:36.066 12:33:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.066 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.066 ************************************ 00:13:36.066 START TEST accel_copy_crc32c 00:13:36.066 ************************************ 00:13:36.066 12:33:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:13:36.066 12:33:18 -- accel/accel.sh@16 -- # local accel_opc 00:13:36.066 12:33:18 -- accel/accel.sh@17 -- # local accel_module 00:13:36.066 12:33:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:36.066 12:33:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:36.066 12:33:18 -- accel/accel.sh@12 -- # build_accel_config 00:13:36.066 12:33:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:36.066 12:33:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:36.066 12:33:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:36.066 12:33:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:36.067 12:33:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:36.067 12:33:18 -- accel/accel.sh@41 -- # local IFS=, 00:13:36.067 12:33:18 -- accel/accel.sh@42 -- # jq -r . 00:13:36.067 [2024-10-01 12:33:18.192060] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:36.067 [2024-10-01 12:33:18.192303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107410 ] 00:13:36.067 [2024-10-01 12:33:18.357469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.067 [2024-10-01 12:33:18.550876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.601 12:33:20 -- accel/accel.sh@18 -- # out=' 00:13:38.601 SPDK Configuration: 00:13:38.601 Core mask: 0x1 00:13:38.601 00:13:38.601 Accel Perf Configuration: 00:13:38.601 Workload Type: copy_crc32c 00:13:38.601 CRC-32C seed: 0 00:13:38.601 Vector size: 4096 bytes 00:13:38.601 Transfer size: 4096 bytes 00:13:38.601 Vector count 1 00:13:38.601 Module: software 00:13:38.601 Queue depth: 32 00:13:38.601 Allocate depth: 32 00:13:38.601 # threads/core: 1 00:13:38.601 Run time: 1 seconds 00:13:38.601 Verify: Yes 00:13:38.601 00:13:38.601 Running for 1 seconds... 
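copy_crc32c fuses the two preceding operations: each 4096-byte buffer is copied and checksummed in a single accel operation, which is why the software module posts its lowest transfer rate of this section in the table below (285792/s, versus 354112/s for plain copy). The Bandwidth column still follows the same arithmetic:

  285792 transfers/s * 4096 B / (1024 * 1024) ≈ 1116 MiB/s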
00:13:38.601 00:13:38.601 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:38.601 ------------------------------------------------------------------------------------ 00:13:38.601 0,0 285792/s 1116 MiB/s 0 0 00:13:38.601 ==================================================================================== 00:13:38.601 Total 285792/s 1116 MiB/s 0 0' 00:13:38.601 12:33:20 -- accel/accel.sh@20 -- # IFS=: 00:13:38.601 12:33:20 -- accel/accel.sh@20 -- # read -r var val 00:13:38.601 12:33:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:38.601 12:33:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:38.601 12:33:20 -- accel/accel.sh@12 -- # build_accel_config 00:13:38.601 12:33:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:38.601 12:33:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:38.602 12:33:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:38.602 12:33:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:38.602 12:33:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:38.602 12:33:20 -- accel/accel.sh@41 -- # local IFS=, 00:13:38.602 12:33:20 -- accel/accel.sh@42 -- # jq -r . 00:13:38.602 [2024-10-01 12:33:20.927644] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:38.602 [2024-10-01 12:33:20.927913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107452 ] 00:13:38.602 [2024-10-01 12:33:21.092918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.861 [2024-10-01 12:33:21.320589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.120 12:33:21 -- accel/accel.sh@21 -- # val= 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.120 12:33:21 -- accel/accel.sh@21 -- # val= 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.120 12:33:21 -- accel/accel.sh@21 -- # val=0x1 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.120 12:33:21 -- accel/accel.sh@21 -- # val= 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.120 12:33:21 -- accel/accel.sh@21 -- # val= 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.120 12:33:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.120 12:33:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.120 12:33:21 -- accel/accel.sh@21 -- # val=0 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.120 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.120 
12:33:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:39.120 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val= 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val=software 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@23 -- # accel_module=software 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val=32 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val=32 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val=1 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val=Yes 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val= 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:39.121 12:33:21 -- accel/accel.sh@21 -- # val= 00:13:39.121 12:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # IFS=: 00:13:39.121 12:33:21 -- accel/accel.sh@20 -- # read -r var val 00:13:41.676 12:33:23 -- accel/accel.sh@21 -- # val= 00:13:41.676 12:33:23 -- accel/accel.sh@22 -- # case "$var" in 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # IFS=: 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # read -r var val 00:13:41.676 12:33:23 -- accel/accel.sh@21 -- # val= 00:13:41.676 12:33:23 -- accel/accel.sh@22 -- # case "$var" in 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # IFS=: 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # read -r var val 00:13:41.676 12:33:23 -- accel/accel.sh@21 -- # val= 00:13:41.676 12:33:23 -- accel/accel.sh@22 -- # case "$var" in 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # IFS=: 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # read -r var val 00:13:41.676 12:33:23 -- accel/accel.sh@21 -- # val= 00:13:41.676 12:33:23 -- accel/accel.sh@22 -- # case "$var" in 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # IFS=: 
00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # read -r var val 00:13:41.676 12:33:23 -- accel/accel.sh@21 -- # val= 00:13:41.676 12:33:23 -- accel/accel.sh@22 -- # case "$var" in 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # IFS=: 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # read -r var val 00:13:41.676 12:33:23 -- accel/accel.sh@21 -- # val= 00:13:41.676 12:33:23 -- accel/accel.sh@22 -- # case "$var" in 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # IFS=: 00:13:41.676 12:33:23 -- accel/accel.sh@20 -- # read -r var val 00:13:41.676 ************************************ 00:13:41.676 END TEST accel_copy_crc32c 00:13:41.676 ************************************ 00:13:41.677 12:33:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:41.677 12:33:23 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:13:41.677 12:33:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:41.677 00:13:41.677 real 0m5.542s 00:13:41.677 user 0m4.983s 00:13:41.677 sys 0m0.371s 00:13:41.677 12:33:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.677 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.677 12:33:23 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:13:41.677 12:33:23 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:13:41.677 12:33:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.677 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.677 ************************************ 00:13:41.677 START TEST accel_copy_crc32c_C2 00:13:41.677 ************************************ 00:13:41.677 12:33:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:13:41.677 12:33:23 -- accel/accel.sh@16 -- # local accel_opc 00:13:41.677 12:33:23 -- accel/accel.sh@17 -- # local accel_module 00:13:41.677 12:33:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:13:41.677 12:33:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:13:41.677 12:33:23 -- accel/accel.sh@12 -- # build_accel_config 00:13:41.677 12:33:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:41.677 12:33:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:41.677 12:33:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:41.677 12:33:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:41.677 12:33:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:41.677 12:33:23 -- accel/accel.sh@41 -- # local IFS=, 00:13:41.677 12:33:23 -- accel/accel.sh@42 -- # jq -r . 00:13:41.677 [2024-10-01 12:33:23.812216] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:41.677 [2024-10-01 12:33:23.812445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107509 ] 00:13:41.677 [2024-10-01 12:33:23.978042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.677 [2024-10-01 12:33:24.187636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.211 12:33:26 -- accel/accel.sh@18 -- # out=' 00:13:44.211 SPDK Configuration: 00:13:44.211 Core mask: 0x1 00:13:44.211 00:13:44.211 Accel Perf Configuration: 00:13:44.211 Workload Type: copy_crc32c 00:13:44.211 CRC-32C seed: 0 00:13:44.211 Vector size: 4096 bytes 00:13:44.211 Transfer size: 8192 bytes 00:13:44.211 Vector count 2 00:13:44.211 Module: software 00:13:44.211 Queue depth: 32 00:13:44.211 Allocate depth: 32 00:13:44.211 # threads/core: 1 00:13:44.211 Run time: 1 seconds 00:13:44.211 Verify: Yes 00:13:44.211 00:13:44.211 Running for 1 seconds... 00:13:44.211 00:13:44.211 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:44.211 ------------------------------------------------------------------------------------ 00:13:44.211 0,0 198048/s 1547 MiB/s 0 0 00:13:44.211 ==================================================================================== 00:13:44.211 Total 198048/s 773 MiB/s 0 0' 00:13:44.211 12:33:26 -- accel/accel.sh@20 -- # IFS=: 00:13:44.211 12:33:26 -- accel/accel.sh@20 -- # read -r var val 00:13:44.211 12:33:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:13:44.211 12:33:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:13:44.211 12:33:26 -- accel/accel.sh@12 -- # build_accel_config 00:13:44.211 12:33:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:44.211 12:33:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:44.211 12:33:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:44.211 12:33:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:44.211 12:33:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:44.211 12:33:26 -- accel/accel.sh@41 -- # local IFS=, 00:13:44.211 12:33:26 -- accel/accel.sh@42 -- # jq -r . 00:13:44.211 [2024-10-01 12:33:26.587410] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
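The two bandwidth figures just above disagree for the same 198048 transfers/s: 1547 MiB/s on the per-core row versus 773 MiB/s on the Total row. A plausible reading, not confirmed against the accel_perf source, is that the per-core row counts the full 8192-byte transfer (two 4096-byte vectors, per the -C 2 configuration) while the Total row counts a single 4096-byte vector per operation. A back-of-envelope shell check, assuming 1 MiB = 1048576 bytes:

# Hypothetical sanity check of the two rows above (bc truncates at scale=1):
echo "scale=1; 198048 * 8192 / 1048576" | bc   # 1547.2 MiB/s, matches the per-core row
echo "scale=1; 198048 * 4096 / 1048576" | bc   # 773.6 MiB/s, matches the Total row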
00:13:44.211 [2024-10-01 12:33:26.587690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107547 ] 00:13:44.470 [2024-10-01 12:33:26.753039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.470 [2024-10-01 12:33:26.991134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.729 12:33:27 -- accel/accel.sh@21 -- # val= 00:13:44.729 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.729 12:33:27 -- accel/accel.sh@21 -- # val= 00:13:44.729 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.729 12:33:27 -- accel/accel.sh@21 -- # val=0x1 00:13:44.729 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.729 12:33:27 -- accel/accel.sh@21 -- # val= 00:13:44.729 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.729 12:33:27 -- accel/accel.sh@21 -- # val= 00:13:44.729 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.729 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val=copy_crc32c 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val=0 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val='8192 bytes' 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val= 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val=software 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@23 -- # accel_module=software 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val=32 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val=32 
00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val=1 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val=Yes 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val= 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:44.730 12:33:27 -- accel/accel.sh@21 -- # val= 00:13:44.730 12:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # IFS=: 00:13:44.730 12:33:27 -- accel/accel.sh@20 -- # read -r var val 00:13:47.264 12:33:29 -- accel/accel.sh@21 -- # val= 00:13:47.264 12:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # IFS=: 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # read -r var val 00:13:47.264 12:33:29 -- accel/accel.sh@21 -- # val= 00:13:47.264 12:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # IFS=: 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # read -r var val 00:13:47.264 12:33:29 -- accel/accel.sh@21 -- # val= 00:13:47.264 12:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # IFS=: 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # read -r var val 00:13:47.264 12:33:29 -- accel/accel.sh@21 -- # val= 00:13:47.264 12:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # IFS=: 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # read -r var val 00:13:47.264 12:33:29 -- accel/accel.sh@21 -- # val= 00:13:47.264 12:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # IFS=: 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # read -r var val 00:13:47.264 12:33:29 -- accel/accel.sh@21 -- # val= 00:13:47.264 12:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # IFS=: 00:13:47.264 12:33:29 -- accel/accel.sh@20 -- # read -r var val 00:13:47.264 ************************************ 00:13:47.264 END TEST accel_copy_crc32c_C2 00:13:47.264 ************************************ 00:13:47.264 12:33:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:47.264 12:33:29 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:13:47.264 12:33:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:47.264 00:13:47.264 real 0m5.596s 00:13:47.264 user 0m5.012s 00:13:47.264 sys 0m0.384s 00:13:47.264 12:33:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.264 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:47.264 12:33:29 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:13:47.264 12:33:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
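Both copy_crc32c variants above are driven by the same accel_perf example binary, with -C selecting the vector count. A hypothetical standalone re-run, using only the paths and flags visible in this log (the harness additionally pipes an accel JSON config over /dev/fd/62, which is empty in these runs):

# Sketch of a manual re-run; throughput numbers will differ from run to run:
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w copy_crc32c -y        # one 4096 B vector per op
./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # two 4096 B vectors per op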
00:13:47.264 12:33:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.264 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:47.264 ************************************ 00:13:47.264 START TEST accel_dualcast 00:13:47.264 ************************************ 00:13:47.264 12:33:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:13:47.264 12:33:29 -- accel/accel.sh@16 -- # local accel_opc 00:13:47.264 12:33:29 -- accel/accel.sh@17 -- # local accel_module 00:13:47.264 12:33:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:13:47.264 12:33:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:13:47.264 12:33:29 -- accel/accel.sh@12 -- # build_accel_config 00:13:47.264 12:33:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:47.264 12:33:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:47.264 12:33:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:47.264 12:33:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:47.264 12:33:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:47.264 12:33:29 -- accel/accel.sh@41 -- # local IFS=, 00:13:47.264 12:33:29 -- accel/accel.sh@42 -- # jq -r . 00:13:47.264 [2024-10-01 12:33:29.484844] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:47.264 [2024-10-01 12:33:29.485078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107604 ] 00:13:47.264 [2024-10-01 12:33:29.650351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.524 [2024-10-01 12:33:29.852090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.058 12:33:32 -- accel/accel.sh@18 -- # out=' 00:13:50.058 SPDK Configuration: 00:13:50.058 Core mask: 0x1 00:13:50.058 00:13:50.058 Accel Perf Configuration: 00:13:50.058 Workload Type: dualcast 00:13:50.058 Transfer size: 4096 bytes 00:13:50.058 Vector count 1 00:13:50.058 Module: software 00:13:50.058 Queue depth: 32 00:13:50.058 Allocate depth: 32 00:13:50.058 # threads/core: 1 00:13:50.058 Run time: 1 seconds 00:13:50.058 Verify: Yes 00:13:50.058 00:13:50.058 Running for 1 seconds... 00:13:50.058 00:13:50.058 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:50.058 ------------------------------------------------------------------------------------ 00:13:50.058 0,0 393728/s 1538 MiB/s 0 0 00:13:50.058 ==================================================================================== 00:13:50.058 Total 393728/s 1538 MiB/s 0 0' 00:13:50.058 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.058 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.058 12:33:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:13:50.058 12:33:32 -- accel/accel.sh@12 -- # build_accel_config 00:13:50.058 12:33:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:50.058 12:33:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:50.058 12:33:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:50.058 12:33:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:50.058 12:33:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:50.058 12:33:32 -- accel/accel.sh@41 -- # local IFS=, 00:13:50.058 12:33:32 -- accel/accel.sh@42 -- # jq -r . 
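Every test in this stretch of the log follows the same shape: a START banner, a timed accel_test run, an END banner, and real/user/sys totals. A rough sketch of the run_test wrapper's visible behavior, reconstructed only from those banners and timing lines (the actual helper lives in common/autotest_common.sh and is not shown here, so details and output ordering will differ):

# Sketch only, not the real implementation:
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # e.g. accel_test -t 1 -w dualcast -y; prints real/user/sys
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}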
00:13:50.058 12:33:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:13:50.058 [2024-10-01 12:33:32.232209] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:50.058 [2024-10-01 12:33:32.232489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107647 ] 00:13:50.058 [2024-10-01 12:33:32.397331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.331 [2024-10-01 12:33:32.637478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.589 12:33:32 -- accel/accel.sh@21 -- # val= 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val= 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val=0x1 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val= 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val= 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val=dualcast 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val= 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val=software 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@23 -- # accel_module=software 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val=32 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val=32 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # 
val=1 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val=Yes 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val= 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:50.590 12:33:32 -- accel/accel.sh@21 -- # val= 00:13:50.590 12:33:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # IFS=: 00:13:50.590 12:33:32 -- accel/accel.sh@20 -- # read -r var val 00:13:52.496 12:33:34 -- accel/accel.sh@21 -- # val= 00:13:52.496 12:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # IFS=: 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # read -r var val 00:13:52.496 12:33:34 -- accel/accel.sh@21 -- # val= 00:13:52.496 12:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # IFS=: 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # read -r var val 00:13:52.496 12:33:34 -- accel/accel.sh@21 -- # val= 00:13:52.496 12:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # IFS=: 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # read -r var val 00:13:52.496 12:33:34 -- accel/accel.sh@21 -- # val= 00:13:52.496 12:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # IFS=: 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # read -r var val 00:13:52.496 12:33:34 -- accel/accel.sh@21 -- # val= 00:13:52.496 12:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # IFS=: 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # read -r var val 00:13:52.496 12:33:34 -- accel/accel.sh@21 -- # val= 00:13:52.496 12:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # IFS=: 00:13:52.496 12:33:34 -- accel/accel.sh@20 -- # read -r var val 00:13:52.496 12:33:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:52.496 12:33:34 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:13:52.496 12:33:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:52.496 00:13:52.496 real 0m5.560s 00:13:52.496 user 0m4.996s 00:13:52.496 sys 0m0.369s 00:13:52.496 12:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.496 12:33:34 -- common/autotest_common.sh@10 -- # set +x 00:13:52.496 ************************************ 00:13:52.496 END TEST accel_dualcast 00:13:52.496 ************************************ 00:13:52.755 12:33:35 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:13:52.755 12:33:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:52.755 12:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.755 12:33:35 -- common/autotest_common.sh@10 -- # set +x 00:13:52.755 ************************************ 00:13:52.755 START TEST accel_compare 00:13:52.755 
************************************ 00:13:52.755 12:33:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:13:52.755 12:33:35 -- accel/accel.sh@16 -- # local accel_opc 00:13:52.755 12:33:35 -- accel/accel.sh@17 -- # local accel_module 00:13:52.755 12:33:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:13:52.755 12:33:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:13:52.755 12:33:35 -- accel/accel.sh@12 -- # build_accel_config 00:13:52.755 12:33:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:52.755 12:33:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:52.755 12:33:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:52.755 12:33:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:52.755 12:33:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:52.755 12:33:35 -- accel/accel.sh@41 -- # local IFS=, 00:13:52.755 12:33:35 -- accel/accel.sh@42 -- # jq -r . 00:13:52.755 [2024-10-01 12:33:35.115262] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:52.755 [2024-10-01 12:33:35.115403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107705 ] 00:13:52.755 [2024-10-01 12:33:35.279013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.014 [2024-10-01 12:33:35.479499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.548 12:33:37 -- accel/accel.sh@18 -- # out=' 00:13:55.548 SPDK Configuration: 00:13:55.548 Core mask: 0x1 00:13:55.548 00:13:55.548 Accel Perf Configuration: 00:13:55.548 Workload Type: compare 00:13:55.548 Transfer size: 4096 bytes 00:13:55.548 Vector count 1 00:13:55.548 Module: software 00:13:55.548 Queue depth: 32 00:13:55.548 Allocate depth: 32 00:13:55.548 # threads/core: 1 00:13:55.548 Run time: 1 seconds 00:13:55.548 Verify: Yes 00:13:55.548 00:13:55.548 Running for 1 seconds... 00:13:55.548 00:13:55.548 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:55.548 ------------------------------------------------------------------------------------ 00:13:55.548 0,0 552288/s 2157 MiB/s 0 0 00:13:55.548 ==================================================================================== 00:13:55.548 Total 552288/s 2157 MiB/s 0 0' 00:13:55.548 12:33:37 -- accel/accel.sh@20 -- # IFS=: 00:13:55.548 12:33:37 -- accel/accel.sh@20 -- # read -r var val 00:13:55.548 12:33:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:13:55.548 12:33:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:13:55.548 12:33:37 -- accel/accel.sh@12 -- # build_accel_config 00:13:55.548 12:33:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:55.548 12:33:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:55.548 12:33:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:55.548 12:33:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:55.548 12:33:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:55.548 12:33:37 -- accel/accel.sh@41 -- # local IFS=, 00:13:55.548 12:33:37 -- accel/accel.sh@42 -- # jq -r . 00:13:55.548 [2024-10-01 12:33:37.846785] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
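The compare workload reads two buffers of the configured 4096-byte transfer size and counts mismatches, which is why the Failed and Miscompares columns above stay at 0. A loose shell analogy, not SPDK code, with illustrative /tmp paths:

# Roughly what one software compare operation does: two equal buffers, zero miscompares.
head -c 4096 /dev/urandom > /tmp/cmp_a.bin
cp /tmp/cmp_a.bin /tmp/cmp_b.bin
cmp -s /tmp/cmp_a.bin /tmp/cmp_b.bin && echo 'buffers match (0 miscompares)'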
00:13:55.548 [2024-10-01 12:33:37.846920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107745 ] 00:13:55.548 [2024-10-01 12:33:38.012552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.807 [2024-10-01 12:33:38.247622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val= 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val= 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val=0x1 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val= 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val= 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val=compare 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@24 -- # accel_opc=compare 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val= 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val=software 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@23 -- # accel_module=software 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val=32 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val=32 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val=1 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val='1 seconds' 
00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val=Yes 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val= 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:56.067 12:33:38 -- accel/accel.sh@21 -- # val= 00:13:56.067 12:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # IFS=: 00:13:56.067 12:33:38 -- accel/accel.sh@20 -- # read -r var val 00:13:58.603 12:33:40 -- accel/accel.sh@21 -- # val= 00:13:58.603 12:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # IFS=: 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # read -r var val 00:13:58.603 12:33:40 -- accel/accel.sh@21 -- # val= 00:13:58.603 12:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # IFS=: 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # read -r var val 00:13:58.603 12:33:40 -- accel/accel.sh@21 -- # val= 00:13:58.603 12:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # IFS=: 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # read -r var val 00:13:58.603 12:33:40 -- accel/accel.sh@21 -- # val= 00:13:58.603 12:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # IFS=: 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # read -r var val 00:13:58.603 12:33:40 -- accel/accel.sh@21 -- # val= 00:13:58.603 12:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # IFS=: 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # read -r var val 00:13:58.603 12:33:40 -- accel/accel.sh@21 -- # val= 00:13:58.603 12:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # IFS=: 00:13:58.603 12:33:40 -- accel/accel.sh@20 -- # read -r var val 00:13:58.603 12:33:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:58.603 12:33:40 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:13:58.603 12:33:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:58.603 00:13:58.603 real 0m5.530s 00:13:58.603 user 0m4.981s 00:13:58.603 sys 0m0.361s 00:13:58.603 12:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.603 ************************************ 00:13:58.603 END TEST accel_compare 00:13:58.603 ************************************ 00:13:58.603 12:33:40 -- common/autotest_common.sh@10 -- # set +x 00:13:58.603 12:33:40 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:13:58.603 12:33:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:58.603 12:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:58.603 12:33:40 -- common/autotest_common.sh@10 -- # set +x 00:13:58.603 ************************************ 00:13:58.603 START TEST accel_xor 00:13:58.603 ************************************ 00:13:58.603 12:33:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:13:58.603 12:33:40 -- accel/accel.sh@16 -- # local accel_opc 00:13:58.603 12:33:40 -- accel/accel.sh@17 -- # local accel_module 00:13:58.603 
12:33:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:13:58.603 12:33:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:13:58.603 12:33:40 -- accel/accel.sh@12 -- # build_accel_config 00:13:58.603 12:33:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:58.603 12:33:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:58.603 12:33:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:58.603 12:33:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:58.603 12:33:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:58.603 12:33:40 -- accel/accel.sh@41 -- # local IFS=, 00:13:58.603 12:33:40 -- accel/accel.sh@42 -- # jq -r . 00:13:58.603 [2024-10-01 12:33:40.720473] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:58.603 [2024-10-01 12:33:40.720605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107797 ] 00:13:58.603 [2024-10-01 12:33:40.884004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.603 [2024-10-01 12:33:41.082442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.139 12:33:43 -- accel/accel.sh@18 -- # out=' 00:14:01.139 SPDK Configuration: 00:14:01.139 Core mask: 0x1 00:14:01.139 00:14:01.139 Accel Perf Configuration: 00:14:01.139 Workload Type: xor 00:14:01.139 Source buffers: 2 00:14:01.139 Transfer size: 4096 bytes 00:14:01.139 Vector count 1 00:14:01.139 Module: software 00:14:01.139 Queue depth: 32 00:14:01.139 Allocate depth: 32 00:14:01.139 # threads/core: 1 00:14:01.139 Run time: 1 seconds 00:14:01.139 Verify: Yes 00:14:01.139 00:14:01.139 Running for 1 seconds... 00:14:01.139 00:14:01.139 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:01.139 ------------------------------------------------------------------------------------ 00:14:01.139 0,0 411744/s 1608 MiB/s 0 0 00:14:01.139 ==================================================================================== 00:14:01.139 Total 411744/s 1608 MiB/s 0 0' 00:14:01.139 12:33:43 -- accel/accel.sh@20 -- # IFS=: 00:14:01.139 12:33:43 -- accel/accel.sh@20 -- # read -r var val 00:14:01.139 12:33:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:14:01.139 12:33:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:14:01.139 12:33:43 -- accel/accel.sh@12 -- # build_accel_config 00:14:01.139 12:33:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:01.139 12:33:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:01.139 12:33:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:01.139 12:33:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:01.139 12:33:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:01.139 12:33:43 -- accel/accel.sh@41 -- # local IFS=, 00:14:01.139 12:33:43 -- accel/accel.sh@42 -- # jq -r . 00:14:01.139 [2024-10-01 12:33:43.433571] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
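A three-source xor variant follows below. Judging from the "Source buffers: 2" line above and the "Source buffers: 3" configuration further down, the -x flag selects the source-buffer count; a hypothetical standalone invocation of both variants:

# Flags taken from this log; output will differ per run:
./build/examples/accel_perf -t 1 -w xor -y         # xor across 2 source buffers
./build/examples/accel_perf -t 1 -w xor -y -x 3    # xor across 3 source buffers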
00:14:01.139 [2024-10-01 12:33:43.433704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107844 ] 00:14:01.139 [2024-10-01 12:33:43.597858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.398 [2024-10-01 12:33:43.815813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val= 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val= 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=0x1 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val= 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val= 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=xor 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=2 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val= 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=software 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@23 -- # accel_module=software 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=32 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=32 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=1 00:14:01.659 12:33:44 -- 
accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val=Yes 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val= 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:01.659 12:33:44 -- accel/accel.sh@21 -- # val= 00:14:01.659 12:33:44 -- accel/accel.sh@22 -- # case "$var" in 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # IFS=: 00:14:01.659 12:33:44 -- accel/accel.sh@20 -- # read -r var val 00:14:04.196 12:33:46 -- accel/accel.sh@21 -- # val= 00:14:04.196 12:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # IFS=: 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # read -r var val 00:14:04.196 12:33:46 -- accel/accel.sh@21 -- # val= 00:14:04.196 12:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # IFS=: 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # read -r var val 00:14:04.196 12:33:46 -- accel/accel.sh@21 -- # val= 00:14:04.196 12:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # IFS=: 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # read -r var val 00:14:04.196 12:33:46 -- accel/accel.sh@21 -- # val= 00:14:04.196 12:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # IFS=: 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # read -r var val 00:14:04.196 12:33:46 -- accel/accel.sh@21 -- # val= 00:14:04.196 12:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # IFS=: 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # read -r var val 00:14:04.196 12:33:46 -- accel/accel.sh@21 -- # val= 00:14:04.196 12:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # IFS=: 00:14:04.196 12:33:46 -- accel/accel.sh@20 -- # read -r var val 00:14:04.196 12:33:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:04.196 12:33:46 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:14:04.196 12:33:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:04.196 00:14:04.196 real 0m5.485s 00:14:04.196 user 0m4.942s 00:14:04.196 sys 0m0.361s 00:14:04.196 ************************************ 00:14:04.196 END TEST accel_xor 00:14:04.196 ************************************ 00:14:04.196 12:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.196 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 12:33:46 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:14:04.196 12:33:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:14:04.196 12:33:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.196 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.196 ************************************ 00:14:04.196 START TEST accel_xor 00:14:04.196 ************************************ 00:14:04.196 
12:33:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:14:04.196 12:33:46 -- accel/accel.sh@16 -- # local accel_opc 00:14:04.196 12:33:46 -- accel/accel.sh@17 -- # local accel_module 00:14:04.196 12:33:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:14:04.196 12:33:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:04.196 12:33:46 -- accel/accel.sh@12 -- # build_accel_config 00:14:04.196 12:33:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:04.196 12:33:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:04.196 12:33:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:04.196 12:33:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:04.196 12:33:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:04.196 12:33:46 -- accel/accel.sh@41 -- # local IFS=, 00:14:04.196 12:33:46 -- accel/accel.sh@42 -- # jq -r . 00:14:04.196 [2024-10-01 12:33:46.279346] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:04.196 [2024-10-01 12:33:46.279497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107898 ] 00:14:04.196 [2024-10-01 12:33:46.443532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.196 [2024-10-01 12:33:46.635422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.734 12:33:48 -- accel/accel.sh@18 -- # out=' 00:14:06.734 SPDK Configuration: 00:14:06.734 Core mask: 0x1 00:14:06.734 00:14:06.734 Accel Perf Configuration: 00:14:06.734 Workload Type: xor 00:14:06.734 Source buffers: 3 00:14:06.734 Transfer size: 4096 bytes 00:14:06.734 Vector count 1 00:14:06.734 Module: software 00:14:06.734 Queue depth: 32 00:14:06.734 Allocate depth: 32 00:14:06.734 # threads/core: 1 00:14:06.734 Run time: 1 seconds 00:14:06.734 Verify: Yes 00:14:06.734 00:14:06.734 Running for 1 seconds... 00:14:06.734 00:14:06.734 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:06.734 ------------------------------------------------------------------------------------ 00:14:06.734 0,0 392512/s 1533 MiB/s 0 0 00:14:06.734 ==================================================================================== 00:14:06.734 Total 392512/s 1533 MiB/s 0 0' 00:14:06.734 12:33:48 -- accel/accel.sh@20 -- # IFS=: 00:14:06.734 12:33:48 -- accel/accel.sh@20 -- # read -r var val 00:14:06.734 12:33:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:14:06.734 12:33:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:06.734 12:33:48 -- accel/accel.sh@12 -- # build_accel_config 00:14:06.734 12:33:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:06.734 12:33:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:06.734 12:33:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:06.734 12:33:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:06.734 12:33:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:06.734 12:33:48 -- accel/accel.sh@41 -- # local IFS=, 00:14:06.734 12:33:48 -- accel/accel.sh@42 -- # jq -r . 00:14:06.734 [2024-10-01 12:33:48.974596] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
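With three source buffers, each destination byte is the XOR of the corresponding bytes of all three sources. A toy single-byte illustration in plain shell arithmetic, with arbitrarily chosen values:

# 0xA5 ^ 0x3C = 0x99, then 0x99 ^ 0x0F = 0x96:
a=0xA5; b=0x3C; c=0x0F
printf 'dst byte = 0x%02X\n' $(( a ^ b ^ c ))   # prints: dst byte = 0x96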
00:14:06.734 [2024-10-01 12:33:48.974713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107936 ] 00:14:06.734 [2024-10-01 12:33:49.140061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.993 [2024-10-01 12:33:49.368360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val= 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val= 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=0x1 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val= 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val= 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=xor 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=3 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val= 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=software 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@23 -- # accel_module=software 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=32 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=32 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=1 00:14:07.253 12:33:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val=Yes 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val= 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:07.253 12:33:49 -- accel/accel.sh@21 -- # val= 00:14:07.253 12:33:49 -- accel/accel.sh@22 -- # case "$var" in 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # IFS=: 00:14:07.253 12:33:49 -- accel/accel.sh@20 -- # read -r var val 00:14:09.160 12:33:51 -- accel/accel.sh@21 -- # val= 00:14:09.160 12:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # IFS=: 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # read -r var val 00:14:09.160 12:33:51 -- accel/accel.sh@21 -- # val= 00:14:09.160 12:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # IFS=: 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # read -r var val 00:14:09.160 12:33:51 -- accel/accel.sh@21 -- # val= 00:14:09.160 12:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # IFS=: 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # read -r var val 00:14:09.160 12:33:51 -- accel/accel.sh@21 -- # val= 00:14:09.160 12:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # IFS=: 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # read -r var val 00:14:09.160 12:33:51 -- accel/accel.sh@21 -- # val= 00:14:09.160 12:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # IFS=: 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # read -r var val 00:14:09.160 12:33:51 -- accel/accel.sh@21 -- # val= 00:14:09.160 12:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # IFS=: 00:14:09.160 12:33:51 -- accel/accel.sh@20 -- # read -r var val 00:14:09.419 12:33:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:09.419 12:33:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:14:09.419 12:33:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:09.419 00:14:09.419 real 0m5.459s 00:14:09.419 user 0m4.911s 00:14:09.419 sys 0m0.364s 00:14:09.419 12:33:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.419 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:14:09.419 ************************************ 00:14:09.419 END TEST accel_xor 00:14:09.419 ************************************ 00:14:09.419 12:33:51 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:14:09.419 12:33:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:14:09.419 12:33:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.419 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:14:09.419 ************************************ 00:14:09.419 START TEST accel_dif_verify 00:14:09.419 ************************************ 
00:14:09.419 12:33:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:14:09.419 12:33:51 -- accel/accel.sh@16 -- # local accel_opc 00:14:09.419 12:33:51 -- accel/accel.sh@17 -- # local accel_module 00:14:09.419 12:33:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:14:09.419 12:33:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:09.419 12:33:51 -- accel/accel.sh@12 -- # build_accel_config 00:14:09.419 12:33:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:09.419 12:33:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:09.419 12:33:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:09.419 12:33:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:09.419 12:33:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:09.419 12:33:51 -- accel/accel.sh@41 -- # local IFS=, 00:14:09.419 12:33:51 -- accel/accel.sh@42 -- # jq -r . 00:14:09.419 [2024-10-01 12:33:51.816285] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:09.419 [2024-10-01 12:33:51.816419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107988 ] 00:14:09.679 [2024-10-01 12:33:51.981709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.679 [2024-10-01 12:33:52.170070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.216 12:33:54 -- accel/accel.sh@18 -- # out=' 00:14:12.216 SPDK Configuration: 00:14:12.216 Core mask: 0x1 00:14:12.216 00:14:12.216 Accel Perf Configuration: 00:14:12.216 Workload Type: dif_verify 00:14:12.216 Vector size: 4096 bytes 00:14:12.216 Transfer size: 4096 bytes 00:14:12.216 Block size: 512 bytes 00:14:12.216 Metadata size: 8 bytes 00:14:12.216 Vector count 1 00:14:12.216 Module: software 00:14:12.216 Queue depth: 32 00:14:12.216 Allocate depth: 32 00:14:12.216 # threads/core: 1 00:14:12.216 Run time: 1 seconds 00:14:12.216 Verify: No 00:14:12.216 00:14:12.216 Running for 1 seconds... 00:14:12.216 00:14:12.216 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:12.216 ------------------------------------------------------------------------------------ 00:14:12.216 0,0 129088/s 512 MiB/s 0 0 00:14:12.216 ==================================================================================== 00:14:12.216 Total 129088/s 504 MiB/s 0 0' 00:14:12.216 12:33:54 -- accel/accel.sh@20 -- # IFS=: 00:14:12.216 12:33:54 -- accel/accel.sh@20 -- # read -r var val 00:14:12.216 12:33:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:14:12.216 12:33:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:12.216 12:33:54 -- accel/accel.sh@12 -- # build_accel_config 00:14:12.216 12:33:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:12.216 12:33:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:12.216 12:33:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:12.216 12:33:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:12.216 12:33:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:12.216 12:33:54 -- accel/accel.sh@41 -- # local IFS=, 00:14:12.216 12:33:54 -- accel/accel.sh@42 -- # jq -r . 00:14:12.216 [2024-10-01 12:33:54.506525] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:12.216 [2024-10-01 12:33:54.506642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108035 ] 00:14:12.216 [2024-10-01 12:33:54.669610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.476 [2024-10-01 12:33:54.891951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val= 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val= 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val=0x1 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val= 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val= 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val=dif_verify 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val= 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val=software 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@23 -- # accel_module=software 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- 
accel/accel.sh@21 -- # val=32 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.736 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.736 12:33:55 -- accel/accel.sh@21 -- # val=32 00:14:12.736 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.737 12:33:55 -- accel/accel.sh@21 -- # val=1 00:14:12.737 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.737 12:33:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:12.737 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.737 12:33:55 -- accel/accel.sh@21 -- # val=No 00:14:12.737 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.737 12:33:55 -- accel/accel.sh@21 -- # val= 00:14:12.737 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:12.737 12:33:55 -- accel/accel.sh@21 -- # val= 00:14:12.737 12:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # IFS=: 00:14:12.737 12:33:55 -- accel/accel.sh@20 -- # read -r var val 00:14:14.733 12:33:57 -- accel/accel.sh@21 -- # val= 00:14:14.733 12:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # IFS=: 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # read -r var val 00:14:14.733 12:33:57 -- accel/accel.sh@21 -- # val= 00:14:14.733 12:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # IFS=: 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # read -r var val 00:14:14.733 12:33:57 -- accel/accel.sh@21 -- # val= 00:14:14.733 12:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # IFS=: 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # read -r var val 00:14:14.733 12:33:57 -- accel/accel.sh@21 -- # val= 00:14:14.733 12:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # IFS=: 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # read -r var val 00:14:14.733 12:33:57 -- accel/accel.sh@21 -- # val= 00:14:14.733 12:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # IFS=: 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # read -r var val 00:14:14.733 12:33:57 -- accel/accel.sh@21 -- # val= 00:14:14.733 12:33:57 -- accel/accel.sh@22 -- # case "$var" in 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # IFS=: 00:14:14.733 12:33:57 -- accel/accel.sh@20 -- # read -r var val 00:14:14.733 12:33:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:14.733 12:33:57 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:14:14.733 12:33:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:14.733 00:14:14.733 real 0m5.447s 00:14:14.733 user 0m4.890s 00:14:14.733 sys 0m0.377s 00:14:14.733 12:33:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.733 ************************************ 00:14:14.733 END TEST accel_dif_verify 00:14:14.733 
************************************ 00:14:14.733 12:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:14.993 12:33:57 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:14:14.993 12:33:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:14:14.993 12:33:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:14.993 12:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:14.993 ************************************ 00:14:14.993 START TEST accel_dif_generate 00:14:14.993 ************************************ 00:14:14.993 12:33:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:14:14.993 12:33:57 -- accel/accel.sh@16 -- # local accel_opc 00:14:14.993 12:33:57 -- accel/accel.sh@17 -- # local accel_module 00:14:14.993 12:33:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:14:14.993 12:33:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:14.993 12:33:57 -- accel/accel.sh@12 -- # build_accel_config 00:14:14.993 12:33:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:14.993 12:33:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:14.993 12:33:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:14.993 12:33:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:14.993 12:33:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:14.993 12:33:57 -- accel/accel.sh@41 -- # local IFS=, 00:14:14.993 12:33:57 -- accel/accel.sh@42 -- # jq -r . 00:14:14.993 [2024-10-01 12:33:57.335772] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:14.993 [2024-10-01 12:33:57.336044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108082 ] 00:14:14.993 [2024-10-01 12:33:57.500319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.253 [2024-10-01 12:33:57.690280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.788 12:33:59 -- accel/accel.sh@18 -- # out=' 00:14:17.788 SPDK Configuration: 00:14:17.788 Core mask: 0x1 00:14:17.788 00:14:17.788 Accel Perf Configuration: 00:14:17.788 Workload Type: dif_generate 00:14:17.788 Vector size: 4096 bytes 00:14:17.788 Transfer size: 4096 bytes 00:14:17.788 Block size: 512 bytes 00:14:17.788 Metadata size: 8 bytes 00:14:17.788 Vector count 1 00:14:17.788 Module: software 00:14:17.788 Queue depth: 32 00:14:17.788 Allocate depth: 32 00:14:17.788 # threads/core: 1 00:14:17.788 Run time: 1 seconds 00:14:17.788 Verify: No 00:14:17.788 00:14:17.788 Running for 1 seconds... 
00:14:17.788 00:14:17.788 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:17.788 ------------------------------------------------------------------------------------ 00:14:17.788 0,0 155168/s 615 MiB/s 0 0 00:14:17.788 ==================================================================================== 00:14:17.788 Total 155168/s 606 MiB/s 0 0' 00:14:17.788 12:33:59 -- accel/accel.sh@20 -- # IFS=: 00:14:17.788 12:33:59 -- accel/accel.sh@20 -- # read -r var val 00:14:17.788 12:33:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:14:17.788 12:33:59 -- accel/accel.sh@12 -- # build_accel_config 00:14:17.788 12:33:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:17.788 12:33:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:17.788 12:33:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:17.788 12:33:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:17.788 12:33:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:17.788 12:33:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:17.788 12:33:59 -- accel/accel.sh@41 -- # local IFS=, 00:14:17.788 12:33:59 -- accel/accel.sh@42 -- # jq -r . 00:14:17.788 [2024-10-01 12:34:00.020739] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:17.788 [2024-10-01 12:34:00.020876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108129 ] 00:14:17.788 [2024-10-01 12:34:00.186062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.046 [2024-10-01 12:34:00.417454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val= 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val= 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val=0x1 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val= 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val= 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val=dif_generate 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 
00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val='512 bytes' 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val='8 bytes' 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val= 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val=software 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@23 -- # accel_module=software 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val=32 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val=32 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val=1 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val=No 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val= 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:18.306 12:34:00 -- accel/accel.sh@21 -- # val= 00:14:18.306 12:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # IFS=: 00:14:18.306 12:34:00 -- accel/accel.sh@20 -- # read -r var val 00:14:20.208 12:34:02 -- accel/accel.sh@21 -- # val= 00:14:20.208 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:14:20.208 12:34:02 -- accel/accel.sh@21 -- # val= 00:14:20.208 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:14:20.208 12:34:02 -- accel/accel.sh@21 -- # val= 00:14:20.208 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:14:20.208 12:34:02 -- 
accel/accel.sh@20 -- # IFS=: 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:14:20.208 12:34:02 -- accel/accel.sh@21 -- # val= 00:14:20.208 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:14:20.208 12:34:02 -- accel/accel.sh@21 -- # val= 00:14:20.208 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:14:20.208 12:34:02 -- accel/accel.sh@21 -- # val= 00:14:20.208 12:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # IFS=: 00:14:20.208 12:34:02 -- accel/accel.sh@20 -- # read -r var val 00:14:20.466 12:34:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:20.466 12:34:02 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:14:20.466 ************************************ 00:14:20.466 END TEST accel_dif_generate 00:14:20.466 ************************************ 00:14:20.466 12:34:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:20.466 00:14:20.466 real 0m5.454s 00:14:20.466 user 0m4.924s 00:14:20.466 sys 0m0.336s 00:14:20.466 12:34:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.466 12:34:02 -- common/autotest_common.sh@10 -- # set +x 00:14:20.466 12:34:02 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:14:20.466 12:34:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:14:20.466 12:34:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:20.466 12:34:02 -- common/autotest_common.sh@10 -- # set +x 00:14:20.466 ************************************ 00:14:20.466 START TEST accel_dif_generate_copy 00:14:20.466 ************************************ 00:14:20.466 12:34:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:14:20.466 12:34:02 -- accel/accel.sh@16 -- # local accel_opc 00:14:20.466 12:34:02 -- accel/accel.sh@17 -- # local accel_module 00:14:20.466 12:34:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:14:20.466 12:34:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:20.466 12:34:02 -- accel/accel.sh@12 -- # build_accel_config 00:14:20.466 12:34:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:20.466 12:34:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:20.466 12:34:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:20.466 12:34:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:20.466 12:34:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:20.466 12:34:02 -- accel/accel.sh@41 -- # local IFS=, 00:14:20.466 12:34:02 -- accel/accel.sh@42 -- # jq -r . 00:14:20.466 [2024-10-01 12:34:02.888156] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:20.467 [2024-10-01 12:34:02.888747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108181 ] 00:14:20.726 [2024-10-01 12:34:03.060335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.726 [2024-10-01 12:34:03.248888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.258 12:34:05 -- accel/accel.sh@18 -- # out=' 00:14:23.258 SPDK Configuration: 00:14:23.258 Core mask: 0x1 00:14:23.258 00:14:23.258 Accel Perf Configuration: 00:14:23.258 Workload Type: dif_generate_copy 00:14:23.258 Vector size: 4096 bytes 00:14:23.258 Transfer size: 4096 bytes 00:14:23.258 Vector count 1 00:14:23.258 Module: software 00:14:23.258 Queue depth: 32 00:14:23.258 Allocate depth: 32 00:14:23.258 # threads/core: 1 00:14:23.258 Run time: 1 seconds 00:14:23.258 Verify: No 00:14:23.258 00:14:23.258 Running for 1 seconds... 00:14:23.258 00:14:23.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:23.258 ------------------------------------------------------------------------------------ 00:14:23.258 0,0 120288/s 477 MiB/s 0 0 00:14:23.258 ==================================================================================== 00:14:23.258 Total 120288/s 469 MiB/s 0 0' 00:14:23.258 12:34:05 -- accel/accel.sh@20 -- # IFS=: 00:14:23.258 12:34:05 -- accel/accel.sh@20 -- # read -r var val 00:14:23.258 12:34:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:14:23.258 12:34:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:23.258 12:34:05 -- accel/accel.sh@12 -- # build_accel_config 00:14:23.258 12:34:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:23.258 12:34:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:23.258 12:34:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:23.258 12:34:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:23.258 12:34:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:23.258 12:34:05 -- accel/accel.sh@41 -- # local IFS=, 00:14:23.258 12:34:05 -- accel/accel.sh@42 -- # jq -r . 00:14:23.258 [2024-10-01 12:34:05.589386] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:23.258 [2024-10-01 12:34:05.589505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108224 ] 00:14:23.258 [2024-10-01 12:34:05.754088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.516 [2024-10-01 12:34:05.985596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val= 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val= 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val=0x1 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val= 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val= 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val= 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val=software 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@23 -- # accel_module=software 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val=32 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val=32 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 
-- # val=1 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val=No 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val= 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:23.775 12:34:06 -- accel/accel.sh@21 -- # val= 00:14:23.775 12:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # IFS=: 00:14:23.775 12:34:06 -- accel/accel.sh@20 -- # read -r var val 00:14:26.310 12:34:08 -- accel/accel.sh@21 -- # val= 00:14:26.310 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:14:26.310 12:34:08 -- accel/accel.sh@21 -- # val= 00:14:26.310 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:14:26.310 12:34:08 -- accel/accel.sh@21 -- # val= 00:14:26.310 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:14:26.310 12:34:08 -- accel/accel.sh@21 -- # val= 00:14:26.310 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:14:26.310 12:34:08 -- accel/accel.sh@21 -- # val= 00:14:26.310 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:14:26.310 12:34:08 -- accel/accel.sh@21 -- # val= 00:14:26.310 12:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # IFS=: 00:14:26.310 12:34:08 -- accel/accel.sh@20 -- # read -r var val 00:14:26.310 12:34:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:26.310 12:34:08 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:14:26.310 ************************************ 00:14:26.310 END TEST accel_dif_generate_copy 00:14:26.310 ************************************ 00:14:26.310 12:34:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:26.310 00:14:26.310 real 0m5.490s 00:14:26.310 user 0m4.894s 00:14:26.310 sys 0m0.397s 00:14:26.310 12:34:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.310 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:26.310 12:34:08 -- accel/accel.sh@107 -- # [[ y == y ]] 00:14:26.310 12:34:08 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:26.310 12:34:08 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:14:26.310 12:34:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.310 12:34:08 -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.310 ************************************ 00:14:26.310 START TEST accel_comp 00:14:26.310 ************************************ 00:14:26.310 12:34:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:26.310 12:34:08 -- accel/accel.sh@16 -- # local accel_opc 00:14:26.310 12:34:08 -- accel/accel.sh@17 -- # local accel_module 00:14:26.310 12:34:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:26.310 12:34:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:26.310 12:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:14:26.310 12:34:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:26.310 12:34:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:26.310 12:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:26.310 12:34:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:26.310 12:34:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:26.310 12:34:08 -- accel/accel.sh@41 -- # local IFS=, 00:14:26.310 12:34:08 -- accel/accel.sh@42 -- # jq -r . 00:14:26.310 [2024-10-01 12:34:08.431042] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:26.310 [2024-10-01 12:34:08.431196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108282 ] 00:14:26.310 [2024-10-01 12:34:08.596364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.310 [2024-10-01 12:34:08.790872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.846 12:34:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:14:28.846 00:14:28.846 SPDK Configuration: 00:14:28.846 Core mask: 0x1 00:14:28.846 00:14:28.846 Accel Perf Configuration: 00:14:28.846 Workload Type: compress 00:14:28.846 Transfer size: 4096 bytes 00:14:28.846 Vector count 1 00:14:28.846 Module: software 00:14:28.846 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:28.846 Queue depth: 32 00:14:28.846 Allocate depth: 32 00:14:28.846 # threads/core: 1 00:14:28.846 Run time: 1 seconds 00:14:28.846 Verify: No 00:14:28.846 00:14:28.846 Running for 1 seconds... 
00:14:28.846 00:14:28.846 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:28.846 ------------------------------------------------------------------------------------ 00:14:28.846 0,0 58336/s 243 MiB/s 0 0 00:14:28.846 ==================================================================================== 00:14:28.846 Total 58336/s 227 MiB/s 0 0' 00:14:28.846 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:28.846 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:28.846 12:34:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:28.846 12:34:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:28.846 12:34:11 -- accel/accel.sh@12 -- # build_accel_config 00:14:28.846 12:34:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:28.846 12:34:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:28.846 12:34:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:28.846 12:34:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:28.846 12:34:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:28.846 12:34:11 -- accel/accel.sh@41 -- # local IFS=, 00:14:28.846 12:34:11 -- accel/accel.sh@42 -- # jq -r . 00:14:28.846 [2024-10-01 12:34:11.132932] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:28.846 [2024-10-01 12:34:11.133056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108323 ] 00:14:28.846 [2024-10-01 12:34:11.296062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.105 [2024-10-01 12:34:11.520237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=0x1 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=compress 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@24 -- # accel_opc=compress 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 
00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=software 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@23 -- # accel_module=software 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=32 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=32 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=1 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val=No 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:29.363 12:34:11 -- accel/accel.sh@21 -- # val= 00:14:29.363 12:34:11 -- accel/accel.sh@22 -- # case "$var" in 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # IFS=: 00:14:29.363 12:34:11 -- accel/accel.sh@20 -- # read -r var val 00:14:31.895 12:34:13 -- accel/accel.sh@21 -- # val= 00:14:31.895 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:14:31.895 12:34:13 -- accel/accel.sh@21 -- # val= 00:14:31.895 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:14:31.895 12:34:13 -- accel/accel.sh@21 -- # val= 00:14:31.895 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:14:31.895 12:34:13 -- accel/accel.sh@21 -- # val= 
00:14:31.895 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:14:31.895 12:34:13 -- accel/accel.sh@21 -- # val= 00:14:31.895 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:14:31.895 12:34:13 -- accel/accel.sh@21 -- # val= 00:14:31.895 12:34:13 -- accel/accel.sh@22 -- # case "$var" in 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # IFS=: 00:14:31.895 12:34:13 -- accel/accel.sh@20 -- # read -r var val 00:14:31.895 12:34:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:31.895 12:34:13 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:14:31.895 12:34:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:31.895 00:14:31.895 real 0m5.464s 00:14:31.895 user 0m4.906s 00:14:31.895 sys 0m0.373s 00:14:31.895 ************************************ 00:14:31.895 END TEST accel_comp 00:14:31.895 ************************************ 00:14:31.895 12:34:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.895 12:34:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.896 12:34:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:31.896 12:34:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:14:31.896 12:34:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:31.896 12:34:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.896 ************************************ 00:14:31.896 START TEST accel_decomp 00:14:31.896 ************************************ 00:14:31.896 12:34:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:31.896 12:34:13 -- accel/accel.sh@16 -- # local accel_opc 00:14:31.896 12:34:13 -- accel/accel.sh@17 -- # local accel_module 00:14:31.896 12:34:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:31.896 12:34:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:31.896 12:34:13 -- accel/accel.sh@12 -- # build_accel_config 00:14:31.896 12:34:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:31.896 12:34:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:31.896 12:34:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:31.896 12:34:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:31.896 12:34:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:31.896 12:34:13 -- accel/accel.sh@41 -- # local IFS=, 00:14:31.896 12:34:13 -- accel/accel.sh@42 -- # jq -r . 00:14:31.896 [2024-10-01 12:34:13.960912] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:31.896 [2024-10-01 12:34:13.961040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108381 ] 00:14:31.896 [2024-10-01 12:34:14.124672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.896 [2024-10-01 12:34:14.310015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.434 12:34:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:14:34.434 00:14:34.434 SPDK Configuration: 00:14:34.434 Core mask: 0x1 00:14:34.434 00:14:34.434 Accel Perf Configuration: 00:14:34.434 Workload Type: decompress 00:14:34.434 Transfer size: 4096 bytes 00:14:34.434 Vector count 1 00:14:34.434 Module: software 00:14:34.434 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:34.434 Queue depth: 32 00:14:34.434 Allocate depth: 32 00:14:34.434 # threads/core: 1 00:14:34.434 Run time: 1 seconds 00:14:34.434 Verify: Yes 00:14:34.434 00:14:34.434 Running for 1 seconds... 00:14:34.434 00:14:34.434 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:34.434 ------------------------------------------------------------------------------------ 00:14:34.434 0,0 63808/s 117 MiB/s 0 0 00:14:34.434 ==================================================================================== 00:14:34.434 Total 63808/s 249 MiB/s 0 0' 00:14:34.434 12:34:16 -- accel/accel.sh@20 -- # IFS=: 00:14:34.434 12:34:16 -- accel/accel.sh@20 -- # read -r var val 00:14:34.434 12:34:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:34.434 12:34:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:34.434 12:34:16 -- accel/accel.sh@12 -- # build_accel_config 00:14:34.434 12:34:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:34.434 12:34:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:34.434 12:34:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:34.434 12:34:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:34.434 12:34:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:34.434 12:34:16 -- accel/accel.sh@41 -- # local IFS=, 00:14:34.434 12:34:16 -- accel/accel.sh@42 -- # jq -r . 00:14:34.434 [2024-10-01 12:34:16.648987] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:34.434 [2024-10-01 12:34:16.649112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108417 ] 00:14:34.434 [2024-10-01 12:34:16.811809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.693 [2024-10-01 12:34:17.041982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val=0x1 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val=decompress 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val=software 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@23 -- # accel_module=software 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val=32 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- 
accel/accel.sh@21 -- # val=32 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val=1 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val=Yes 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:34.953 12:34:17 -- accel/accel.sh@21 -- # val= 00:14:34.953 12:34:17 -- accel/accel.sh@22 -- # case "$var" in 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # IFS=: 00:14:34.953 12:34:17 -- accel/accel.sh@20 -- # read -r var val 00:14:36.860 12:34:19 -- accel/accel.sh@21 -- # val= 00:14:36.860 12:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # IFS=: 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # read -r var val 00:14:36.860 12:34:19 -- accel/accel.sh@21 -- # val= 00:14:36.860 12:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # IFS=: 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # read -r var val 00:14:36.860 12:34:19 -- accel/accel.sh@21 -- # val= 00:14:36.860 12:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # IFS=: 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # read -r var val 00:14:36.860 12:34:19 -- accel/accel.sh@21 -- # val= 00:14:36.860 12:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # IFS=: 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # read -r var val 00:14:36.860 12:34:19 -- accel/accel.sh@21 -- # val= 00:14:36.860 12:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # IFS=: 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # read -r var val 00:14:36.860 12:34:19 -- accel/accel.sh@21 -- # val= 00:14:36.860 12:34:19 -- accel/accel.sh@22 -- # case "$var" in 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # IFS=: 00:14:36.860 12:34:19 -- accel/accel.sh@20 -- # read -r var val 00:14:36.860 12:34:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:36.860 12:34:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:36.860 12:34:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:36.860 00:14:36.860 real 0m5.452s 00:14:36.860 user 0m4.885s 00:14:36.860 sys 0m0.373s 00:14:36.860 12:34:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.860 ************************************ 00:14:36.860 END TEST accel_decomp 00:14:36.860 ************************************ 00:14:36.860 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.119 12:34:19 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
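The decmop_full pass starting here (the spelling comes from accel.sh itself) reuses the decompress workload but adds -o 0. Judging from the configuration block below, which reports a transfer size of 111250 bytes rather than the fixed 4096 used by the other cases, the zero appears to let accel_perf derive the transfer size from the input data. A minimal standalone sketch, assuming the same checkout and input file as this run and again dropping the fd-based JSON config:

  # minimal sketch; -l and -y are taken verbatim from the run_test line above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0

Here -l points accel_perf at the bib input file (the "Preparing input file..." line in the output) and -y enables verification, matching the "Verify: Yes" rows reported for the decompress cases.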
00:14:37.119 12:34:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:14:37.119 12:34:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:37.119 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.119 ************************************ 00:14:37.119 START TEST accel_decmop_full 00:14:37.119 ************************************ 00:14:37.119 12:34:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:37.119 12:34:19 -- accel/accel.sh@16 -- # local accel_opc 00:14:37.119 12:34:19 -- accel/accel.sh@17 -- # local accel_module 00:14:37.119 12:34:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:37.120 12:34:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:37.120 12:34:19 -- accel/accel.sh@12 -- # build_accel_config 00:14:37.120 12:34:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:37.120 12:34:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:37.120 12:34:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:37.120 12:34:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:37.120 12:34:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:37.120 12:34:19 -- accel/accel.sh@41 -- # local IFS=, 00:14:37.120 12:34:19 -- accel/accel.sh@42 -- # jq -r . 00:14:37.120 [2024-10-01 12:34:19.474434] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:37.120 [2024-10-01 12:34:19.474910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108474 ] 00:14:37.120 [2024-10-01 12:34:19.639941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.379 [2024-10-01 12:34:19.826657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.943 12:34:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:14:39.943 00:14:39.943 SPDK Configuration: 00:14:39.943 Core mask: 0x1 00:14:39.943 00:14:39.943 Accel Perf Configuration: 00:14:39.943 Workload Type: decompress 00:14:39.943 Transfer size: 111250 bytes 00:14:39.943 Vector count 1 00:14:39.943 Module: software 00:14:39.943 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:39.943 Queue depth: 32 00:14:39.943 Allocate depth: 32 00:14:39.943 # threads/core: 1 00:14:39.943 Run time: 1 seconds 00:14:39.943 Verify: Yes 00:14:39.943 00:14:39.943 Running for 1 seconds... 
00:14:39.943 00:14:39.943 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:39.943 ------------------------------------------------------------------------------------ 00:14:39.943 0,0 4704/s 194 MiB/s 0 0 00:14:39.943 ==================================================================================== 00:14:39.943 Total 4704/s 499 MiB/s 0 0' 00:14:39.943 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:39.943 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:39.943 12:34:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:39.943 12:34:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:39.943 12:34:22 -- accel/accel.sh@12 -- # build_accel_config 00:14:39.943 12:34:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:39.943 12:34:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:39.943 12:34:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:39.943 12:34:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:39.943 12:34:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:39.943 12:34:22 -- accel/accel.sh@41 -- # local IFS=, 00:14:39.943 12:34:22 -- accel/accel.sh@42 -- # jq -r . 00:14:39.944 [2024-10-01 12:34:22.179059] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:39.944 [2024-10-01 12:34:22.179242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108515 ] 00:14:39.944 [2024-10-01 12:34:22.343631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.203 [2024-10-01 12:34:22.568187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=0x1 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=decompress 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:40.462 12:34:22 -- 
accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=software 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@23 -- # accel_module=software 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=32 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=32 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=1 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val=Yes 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:40.462 12:34:22 -- accel/accel.sh@21 -- # val= 00:14:40.462 12:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # IFS=: 00:14:40.462 12:34:22 -- accel/accel.sh@20 -- # read -r var val 00:14:42.369 12:34:24 -- accel/accel.sh@21 -- # val= 00:14:42.369 12:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # IFS=: 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # read -r var val 00:14:42.369 12:34:24 -- accel/accel.sh@21 -- # val= 00:14:42.369 12:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # IFS=: 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # read -r var val 00:14:42.369 12:34:24 -- accel/accel.sh@21 -- # val= 00:14:42.369 12:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # IFS=: 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # read -r var val 00:14:42.369 12:34:24 -- 
accel/accel.sh@21 -- # val= 00:14:42.369 12:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # IFS=: 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # read -r var val 00:14:42.369 12:34:24 -- accel/accel.sh@21 -- # val= 00:14:42.369 12:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # IFS=: 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # read -r var val 00:14:42.369 12:34:24 -- accel/accel.sh@21 -- # val= 00:14:42.369 12:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # IFS=: 00:14:42.369 12:34:24 -- accel/accel.sh@20 -- # read -r var val 00:14:42.629 12:34:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:42.629 12:34:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:42.629 12:34:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:42.629 00:14:42.629 real 0m5.484s 00:14:42.629 user 0m4.924s 00:14:42.629 sys 0m0.367s 00:14:42.629 12:34:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.629 ************************************ 00:14:42.629 END TEST accel_decmop_full 00:14:42.629 ************************************ 00:14:42.629 12:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:42.629 12:34:24 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:42.629 12:34:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:14:42.629 12:34:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.629 12:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:42.629 ************************************ 00:14:42.629 START TEST accel_decomp_mcore 00:14:42.629 ************************************ 00:14:42.629 12:34:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:42.629 12:34:24 -- accel/accel.sh@16 -- # local accel_opc 00:14:42.629 12:34:24 -- accel/accel.sh@17 -- # local accel_module 00:14:42.629 12:34:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:42.629 12:34:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:42.629 12:34:24 -- accel/accel.sh@12 -- # build_accel_config 00:14:42.629 12:34:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:42.629 12:34:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:42.629 12:34:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:42.629 12:34:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:42.629 12:34:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:42.629 12:34:24 -- accel/accel.sh@41 -- # local IFS=, 00:14:42.629 12:34:24 -- accel/accel.sh@42 -- # jq -r . 00:14:42.629 [2024-10-01 12:34:25.031204] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:42.629 [2024-10-01 12:34:25.031332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108566 ] 00:14:42.889 [2024-10-01 12:34:25.207744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.889 [2024-10-01 12:34:25.401145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.889 [2024-10-01 12:34:25.401356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.889 [2024-10-01 12:34:25.401520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.889 [2024-10-01 12:34:25.401537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.426 12:34:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:14:45.426 00:14:45.426 SPDK Configuration: 00:14:45.426 Core mask: 0xf 00:14:45.426 00:14:45.426 Accel Perf Configuration: 00:14:45.426 Workload Type: decompress 00:14:45.426 Transfer size: 4096 bytes 00:14:45.426 Vector count 1 00:14:45.426 Module: software 00:14:45.426 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:45.426 Queue depth: 32 00:14:45.426 Allocate depth: 32 00:14:45.426 # threads/core: 1 00:14:45.426 Run time: 1 seconds 00:14:45.426 Verify: Yes 00:14:45.426 00:14:45.426 Running for 1 seconds... 00:14:45.426 00:14:45.426 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:45.426 ------------------------------------------------------------------------------------ 00:14:45.426 0,0 54880/s 101 MiB/s 0 0 00:14:45.426 3,0 57920/s 106 MiB/s 0 0 00:14:45.426 2,0 56864/s 104 MiB/s 0 0 00:14:45.426 1,0 56608/s 104 MiB/s 0 0 00:14:45.426 ==================================================================================== 00:14:45.426 Total 226272/s 883 MiB/s 0 0' 00:14:45.426 12:34:27 -- accel/accel.sh@20 -- # IFS=: 00:14:45.426 12:34:27 -- accel/accel.sh@20 -- # read -r var val 00:14:45.426 12:34:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:45.426 12:34:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:45.426 12:34:27 -- accel/accel.sh@12 -- # build_accel_config 00:14:45.426 12:34:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:45.426 12:34:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:45.426 12:34:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:45.426 12:34:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:45.426 12:34:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:45.426 12:34:27 -- accel/accel.sh@41 -- # local IFS=, 00:14:45.426 12:34:27 -- accel/accel.sh@42 -- # jq -r . 00:14:45.426 [2024-10-01 12:34:27.799608] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
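[Editor's note: -m 0xf is a hex core mask; 0xf is binary 1111, selecting cores 0-3, which matches the four "Reactor started on core N" lines and the four "Core,Thread" rows (0,0 through 3,0, one worker thread per core) in the table above. A generic illustration of building such a mask in shell, not something the harness itself does:

    ncores=4
    printf '0x%x\n' $(( (1 << ncores) - 1 ))   # prints 0xf

As a cross-check, the per-core transfer rates in that table sum to the Total row: 54880 + 57920 + 56864 + 56608 = 226272 transfers/s.]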
00:14:45.426 [2024-10-01 12:34:27.799747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108613 ] 00:14:45.685 [2024-10-01 12:34:27.975456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.685 [2024-10-01 12:34:28.205625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.685 [2024-10-01 12:34:28.205846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.685 [2024-10-01 12:34:28.205997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.685 [2024-10-01 12:34:28.206002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=0xf 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=decompress 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=software 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@23 -- # accel_module=software 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 
00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=32 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=32 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=1 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val=Yes 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:45.945 12:34:28 -- accel/accel.sh@21 -- # val= 00:14:45.945 12:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # IFS=: 00:14:45.945 12:34:28 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- 
accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@21 -- # val= 00:14:48.479 12:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # IFS=: 00:14:48.479 12:34:30 -- accel/accel.sh@20 -- # read -r var val 00:14:48.479 12:34:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:48.479 12:34:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:48.479 12:34:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:48.479 00:14:48.479 real 0m5.613s 00:14:48.479 user 0m16.472s 00:14:48.479 sys 0m0.400s 00:14:48.479 12:34:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.479 12:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:48.479 ************************************ 00:14:48.479 END TEST accel_decomp_mcore 00:14:48.479 ************************************ 00:14:48.479 12:34:30 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:48.479 12:34:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:48.479 12:34:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:48.479 12:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:48.479 ************************************ 00:14:48.479 START TEST accel_decomp_full_mcore 00:14:48.479 ************************************ 00:14:48.479 12:34:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:48.479 12:34:30 -- accel/accel.sh@16 -- # local accel_opc 00:14:48.479 12:34:30 -- accel/accel.sh@17 -- # local accel_module 00:14:48.479 12:34:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:48.479 12:34:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:48.479 12:34:30 -- accel/accel.sh@12 -- # build_accel_config 00:14:48.479 12:34:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:48.479 12:34:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:48.479 12:34:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:48.479 12:34:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:48.479 12:34:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:48.479 12:34:30 -- accel/accel.sh@41 -- # local IFS=, 00:14:48.479 12:34:30 -- accel/accel.sh@42 -- # jq -r . 00:14:48.479 [2024-10-01 12:34:30.723620] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:48.480 [2024-10-01 12:34:30.723757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108673 ] 00:14:48.480 [2024-10-01 12:34:30.900860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.739 [2024-10-01 12:34:31.091718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.739 [2024-10-01 12:34:31.091921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.739 [2024-10-01 12:34:31.092075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.739 [2024-10-01 12:34:31.092083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.274 12:34:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:14:51.274 00:14:51.274 SPDK Configuration: 00:14:51.274 Core mask: 0xf 00:14:51.274 00:14:51.274 Accel Perf Configuration: 00:14:51.274 Workload Type: decompress 00:14:51.274 Transfer size: 111250 bytes 00:14:51.274 Vector count 1 00:14:51.274 Module: software 00:14:51.274 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:51.274 Queue depth: 32 00:14:51.274 Allocate depth: 32 00:14:51.274 # threads/core: 1 00:14:51.274 Run time: 1 seconds 00:14:51.274 Verify: Yes 00:14:51.274 00:14:51.274 Running for 1 seconds... 00:14:51.274 00:14:51.274 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:51.274 ------------------------------------------------------------------------------------ 00:14:51.274 0,0 4544/s 187 MiB/s 0 0 00:14:51.274 3,0 4800/s 198 MiB/s 0 0 00:14:51.274 2,0 4800/s 198 MiB/s 0 0 00:14:51.274 1,0 4640/s 191 MiB/s 0 0 00:14:51.274 ==================================================================================== 00:14:51.274 Total 18784/s 1992 MiB/s 0 0' 00:14:51.274 12:34:33 -- accel/accel.sh@20 -- # IFS=: 00:14:51.274 12:34:33 -- accel/accel.sh@20 -- # read -r var val 00:14:51.274 12:34:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:51.274 12:34:33 -- accel/accel.sh@12 -- # build_accel_config 00:14:51.274 12:34:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:51.274 12:34:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:51.274 12:34:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:51.274 12:34:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:51.274 12:34:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:51.274 12:34:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:51.274 12:34:33 -- accel/accel.sh@41 -- # local IFS=, 00:14:51.274 12:34:33 -- accel/accel.sh@42 -- # jq -r . 00:14:51.274 [2024-10-01 12:34:33.548017] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
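[Editor's note: a consistency check on the Total rows, offered as arithmetic over the logged numbers rather than anything from accel_perf documentation: aggregate bandwidth is transfers/s × transfer size, so for the table above

    18784/s × 111250 B = 2,089,720,000 B/s ≈ 1992 MiB/s

and for the earlier 4096-byte mcore table, 226272/s × 4096 B ≈ 883 MiB/s, both matching the logged Total figures.]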
00:14:51.274 [2024-10-01 12:34:33.548159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108718 ] 00:14:51.274 [2024-10-01 12:34:33.725988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.533 [2024-10-01 12:34:33.953919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.533 [2024-10-01 12:34:33.954091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.533 [2024-10-01 12:34:33.954253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.533 [2024-10-01 12:34:33.954268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=0xf 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=decompress 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=software 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@23 -- # accel_module=software 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 
00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=32 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=32 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=1 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val=Yes 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:51.792 12:34:34 -- accel/accel.sh@21 -- # val= 00:14:51.792 12:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # IFS=: 00:14:51.792 12:34:34 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- 
accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@21 -- # val= 00:14:54.324 12:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # IFS=: 00:14:54.324 12:34:36 -- accel/accel.sh@20 -- # read -r var val 00:14:54.324 12:34:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:54.324 12:34:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:54.324 12:34:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:54.324 00:14:54.324 real 0m5.705s 00:14:54.324 user 0m16.809s 00:14:54.324 sys 0m0.432s 00:14:54.324 12:34:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.324 ************************************ 00:14:54.324 END TEST accel_decomp_full_mcore 00:14:54.324 ************************************ 00:14:54.324 12:34:36 -- common/autotest_common.sh@10 -- # set +x 00:14:54.324 12:34:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:54.324 12:34:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:14:54.324 12:34:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.324 12:34:36 -- common/autotest_common.sh@10 -- # set +x 00:14:54.324 ************************************ 00:14:54.324 START TEST accel_decomp_mthread 00:14:54.324 ************************************ 00:14:54.324 12:34:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:54.324 12:34:36 -- accel/accel.sh@16 -- # local accel_opc 00:14:54.324 12:34:36 -- accel/accel.sh@17 -- # local accel_module 00:14:54.324 12:34:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:54.324 12:34:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:54.324 12:34:36 -- accel/accel.sh@12 -- # build_accel_config 00:14:54.324 12:34:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:54.324 12:34:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:54.324 12:34:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:54.324 12:34:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:54.324 12:34:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:54.324 12:34:36 -- accel/accel.sh@41 -- # local IFS=, 00:14:54.324 12:34:36 -- accel/accel.sh@42 -- # jq -r . 00:14:54.324 [2024-10-01 12:34:36.506920] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:54.324 [2024-10-01 12:34:36.507113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108773 ] 00:14:54.324 [2024-10-01 12:34:36.672155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.582 [2024-10-01 12:34:36.856854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.115 12:34:39 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:14:57.115 00:14:57.115 SPDK Configuration: 00:14:57.115 Core mask: 0x1 00:14:57.115 00:14:57.115 Accel Perf Configuration: 00:14:57.115 Workload Type: decompress 00:14:57.115 Transfer size: 4096 bytes 00:14:57.115 Vector count 1 00:14:57.115 Module: software 00:14:57.115 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:57.115 Queue depth: 32 00:14:57.115 Allocate depth: 32 00:14:57.115 # threads/core: 2 00:14:57.115 Run time: 1 seconds 00:14:57.115 Verify: Yes 00:14:57.115 00:14:57.115 Running for 1 seconds... 00:14:57.115 00:14:57.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:14:57.115 ------------------------------------------------------------------------------------ 00:14:57.115 0,1 32480/s 59 MiB/s 0 0 00:14:57.115 0,0 32384/s 59 MiB/s 0 0 00:14:57.115 ==================================================================================== 00:14:57.115 Total 64864/s 253 MiB/s 0 0' 00:14:57.115 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.115 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.115 12:34:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:57.115 12:34:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:57.115 12:34:39 -- accel/accel.sh@12 -- # build_accel_config 00:14:57.116 12:34:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:57.116 12:34:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:57.116 12:34:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:57.116 12:34:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:57.116 12:34:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:57.116 12:34:39 -- accel/accel.sh@41 -- # local IFS=, 00:14:57.116 12:34:39 -- accel/accel.sh@42 -- # jq -r . 00:14:57.116 [2024-10-01 12:34:39.210001] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
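[Editor's note: with -T 2 the configuration above reports "# threads/core: 2", and the "Core,Thread" rows 0,0 and 0,1 are the two worker threads on core 0; as with the mcore runs, their rates sum to the Total row (32480 + 32384 = 64864 transfers/s). A hypothetical one-liner for totalling such rows from a saved, timestamp-free accel_perf table (results.txt is an assumed file name):

    awk -F'[ /]+' '/^[0-9]+,[0-9]+/ { sum += $2 } END { print sum " transfers/s" }' results.txt

]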
00:14:57.116 [2024-10-01 12:34:39.210140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108819 ] 00:14:57.116 [2024-10-01 12:34:39.375293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.116 [2024-10-01 12:34:39.605044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val=0x1 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val=decompress 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val=software 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@23 -- # accel_module=software 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val=32 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- 
accel/accel.sh@21 -- # val=32 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val=2 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val=Yes 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:57.374 12:34:39 -- accel/accel.sh@21 -- # val= 00:14:57.374 12:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # IFS=: 00:14:57.374 12:34:39 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@21 -- # val= 00:14:59.908 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@21 -- # val= 00:14:59.908 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@21 -- # val= 00:14:59.908 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@21 -- # val= 00:14:59.908 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@21 -- # val= 00:14:59.908 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@21 -- # val= 00:14:59.908 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@21 -- # val= 00:14:59.908 12:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # IFS=: 00:14:59.908 12:34:41 -- accel/accel.sh@20 -- # read -r var val 00:14:59.908 12:34:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:14:59.908 12:34:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:14:59.908 12:34:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:59.908 00:14:59.908 real 0m5.492s 00:14:59.908 user 0m4.943s 00:14:59.908 sys 0m0.362s 00:14:59.908 12:34:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.908 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.908 ************************************ 00:14:59.908 END 
TEST accel_decomp_mthread 00:14:59.908 ************************************ 00:14:59.908 12:34:41 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:59.908 12:34:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:59.908 12:34:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:59.908 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:14:59.908 ************************************ 00:14:59.908 START TEST accel_deomp_full_mthread 00:14:59.908 ************************************ 00:14:59.908 12:34:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:59.908 12:34:42 -- accel/accel.sh@16 -- # local accel_opc 00:14:59.908 12:34:42 -- accel/accel.sh@17 -- # local accel_module 00:14:59.908 12:34:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:59.908 12:34:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:59.908 12:34:42 -- accel/accel.sh@12 -- # build_accel_config 00:14:59.908 12:34:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:14:59.908 12:34:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:59.908 12:34:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:59.908 12:34:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:14:59.908 12:34:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:14:59.908 12:34:42 -- accel/accel.sh@41 -- # local IFS=, 00:14:59.908 12:34:42 -- accel/accel.sh@42 -- # jq -r . 00:14:59.908 [2024-10-01 12:34:42.068170] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:59.908 [2024-10-01 12:34:42.068291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108872 ] 00:14:59.908 [2024-10-01 12:34:42.232435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.908 [2024-10-01 12:34:42.420282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.444 12:34:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:15:02.444 00:15:02.444 SPDK Configuration: 00:15:02.444 Core mask: 0x1 00:15:02.444 00:15:02.444 Accel Perf Configuration: 00:15:02.444 Workload Type: decompress 00:15:02.444 Transfer size: 111250 bytes 00:15:02.444 Vector count 1 00:15:02.444 Module: software 00:15:02.444 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:15:02.444 Queue depth: 32 00:15:02.444 Allocate depth: 32 00:15:02.444 # threads/core: 2 00:15:02.444 Run time: 1 seconds 00:15:02.444 Verify: Yes 00:15:02.444 00:15:02.444 Running for 1 seconds... 
00:15:02.444 00:15:02.444 Core,Thread Transfers Bandwidth Failed Miscompares 00:15:02.444 ------------------------------------------------------------------------------------ 00:15:02.444 0,1 2432/s 100 MiB/s 0 0 00:15:02.444 0,0 2368/s 97 MiB/s 0 0 00:15:02.444 ==================================================================================== 00:15:02.444 Total 4800/s 509 MiB/s 0 0' 00:15:02.444 12:34:44 -- accel/accel.sh@20 -- # IFS=: 00:15:02.444 12:34:44 -- accel/accel.sh@20 -- # read -r var val 00:15:02.444 12:34:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:15:02.444 12:34:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:15:02.444 12:34:44 -- accel/accel.sh@12 -- # build_accel_config 00:15:02.444 12:34:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:15:02.444 12:34:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:02.444 12:34:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:02.444 12:34:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:15:02.444 12:34:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:15:02.444 12:34:44 -- accel/accel.sh@41 -- # local IFS=, 00:15:02.444 12:34:44 -- accel/accel.sh@42 -- # jq -r . 00:15:02.444 [2024-10-01 12:34:44.806983] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:02.444 [2024-10-01 12:34:44.807127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108919 ] 00:15:02.444 [2024-10-01 12:34:44.970667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.703 [2024-10-01 12:34:45.185715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.962 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.962 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.962 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val=0x1 00:15:02.962 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.962 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.962 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val=decompress 00:15:02.962 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.962 12:34:45 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.962 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.962 12:34:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val=software 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@23 -- # accel_module=software 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val=32 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val=32 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val=2 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val=Yes 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:02.963 12:34:45 -- accel/accel.sh@21 -- # val= 00:15:02.963 12:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # IFS=: 00:15:02.963 12:34:45 -- accel/accel.sh@20 -- # read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@21 -- # val= 00:15:05.498 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@21 -- # val= 00:15:05.498 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@21 -- # val= 00:15:05.498 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # 
read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@21 -- # val= 00:15:05.498 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@21 -- # val= 00:15:05.498 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@21 -- # val= 00:15:05.498 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@21 -- # val= 00:15:05.498 12:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # IFS=: 00:15:05.498 12:34:47 -- accel/accel.sh@20 -- # read -r var val 00:15:05.498 12:34:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:15:05.498 12:34:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:15:05.498 12:34:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:05.498 00:15:05.498 real 0m5.520s 00:15:05.498 user 0m4.979s 00:15:05.498 sys 0m0.370s 00:15:05.498 12:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.498 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:05.498 ************************************ 00:15:05.498 END TEST accel_deomp_full_mthread 00:15:05.498 ************************************ 00:15:05.498 12:34:47 -- accel/accel.sh@116 -- # [[ n == y ]] 00:15:05.498 12:34:47 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:15:05.498 12:34:47 -- accel/accel.sh@129 -- # build_accel_config 00:15:05.498 12:34:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:15:05.498 12:34:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:05.498 12:34:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:05.498 12:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:05.498 12:34:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:05.498 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:05.498 12:34:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:15:05.498 12:34:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:15:05.498 12:34:47 -- accel/accel.sh@41 -- # local IFS=, 00:15:05.498 12:34:47 -- accel/accel.sh@42 -- # jq -r . 00:15:05.498 ************************************ 00:15:05.498 START TEST accel_dif_functional_tests 00:15:05.498 ************************************ 00:15:05.498 12:34:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:15:05.498 [2024-10-01 12:34:47.695959] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
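[Editor's note: the recurring "-c /dev/fd/62" argument appears to be bash process substitution at work in the harness: build_accel_config assembles a JSON accel configuration (the accel_json_cfg / jq -r . trace above), and the test binary reads it through a /dev/fd path instead of a temp file. A minimal generic demonstration, unrelated to the harness's actual config contents:

    cat <(echo '{"demo": true}')
    # bash substitutes a path such as /dev/fd/63 for <(...); cat reads the JSON from it

On the CUnit output below: the dif.c *ERROR* lines are emitted by the intentionally corrupted "DIF not generated" / "incorrect" cases, which is why they coexist with a clean run summary (20 tests ran, 20 passed).]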
00:15:05.498 [2024-10-01 12:34:47.696080] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108969 ] 00:15:05.498 [2024-10-01 12:34:47.869751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:05.757 [2024-10-01 12:34:48.060850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.757 [2024-10-01 12:34:48.060806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.758 [2024-10-01 12:34:48.060847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.017 00:15:06.017 00:15:06.017 CUnit - A unit testing framework for C - Version 2.1-3 00:15:06.017 http://cunit.sourceforge.net/ 00:15:06.017 00:15:06.017 00:15:06.017 Suite: accel_dif 00:15:06.017 Test: verify: DIF generated, GUARD check ...passed 00:15:06.017 Test: verify: DIF generated, APPTAG check ...passed 00:15:06.017 Test: verify: DIF generated, REFTAG check ...passed 00:15:06.017 Test: verify: DIF not generated, GUARD check ...passed 00:15:06.017 Test: verify: DIF not generated, APPTAG check ...passed 00:15:06.017 Test: verify: DIF not generated, REFTAG check ...passed 00:15:06.017 Test: verify: APPTAG correct, APPTAG check ...passed 00:15:06.017 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:15:06.017 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:15:06.017 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:15:06.017 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-10-01 12:34:48.384646] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:15:06.017 [2024-10-01 12:34:48.384743] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:15:06.017 [2024-10-01 12:34:48.384815] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:15:06.017 [2024-10-01 12:34:48.384855] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:15:06.017 [2024-10-01 12:34:48.384892] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:15:06.017 [2024-10-01 12:34:48.384934] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:15:06.017 [2024-10-01 12:34:48.385050] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:15:06.017 passed 00:15:06.017 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:15:06.017 Test: generate copy: DIF generated, GUARD check ...passed 00:15:06.017 Test: generate copy: DIF generated, APPTAG check ...passed 00:15:06.017 Test: generate copy: DIF generated, REFTAG check ...passed 00:15:06.017 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:15:06.017 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:15:06.017 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-10-01 12:34:48.385235] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:15:06.017 passed 00:15:06.017 Test: generate copy: iovecs-len validate ...passed 00:15:06.017 Test: generate copy: buffer alignment validate ...passed 00:15:06.017 00:15:06.017 Run Summary: Type Total Ran Passed Failed Inactive 00:15:06.017 suites 1 1 n/a 0 0 00:15:06.017 tests 20 20 20 0 0
asserts 204 204 204 0 n/a 00:15:06.017 00:15:06.017 Elapsed time = 0.001 seconds 00:15:06.017 [2024-10-01 12:34:48.385609] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:15:07.397 00:15:07.397 real 0m1.966s 00:15:07.397 user 0m3.860s 00:15:07.397 sys 0m0.269s 00:15:07.397 ************************************ 00:15:07.397 END TEST accel_dif_functional_tests 00:15:07.397 ************************************ 00:15:07.397 12:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.397 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.397 ************************************ 00:15:07.397 END TEST accel 00:15:07.397 ************************************ 00:15:07.397 00:15:07.397 real 2m2.206s 00:15:07.397 user 2m14.437s 00:15:07.397 sys 0m9.824s 00:15:07.397 12:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.397 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.397 12:34:49 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:15:07.397 12:34:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:07.397 12:34:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.397 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.397 ************************************ 00:15:07.397 START TEST accel_rpc 00:15:07.397 ************************************ 00:15:07.397 12:34:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:15:07.397 * Looking for test storage... 00:15:07.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:15:07.397 12:34:49 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:07.397 12:34:49 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=109067 00:15:07.397 12:34:49 -- accel/accel_rpc.sh@15 -- # waitforlisten 109067 00:15:07.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.397 12:34:49 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:07.398 12:34:49 -- common/autotest_common.sh@819 -- # '[' -z 109067 ']' 00:15:07.398 12:34:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.398 12:34:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.398 12:34:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.398 12:34:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.398 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.398 [2024-10-01 12:34:49.922963] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:07.398 [2024-10-01 12:34:49.923120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109067 ] 00:15:07.657 [2024-10-01 12:34:50.088328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.915 [2024-10-01 12:34:50.280414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.915 [2024-10-01 12:34:50.280762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.483 12:34:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.483 12:34:50 -- common/autotest_common.sh@852 -- # return 0 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:15:08.483 12:34:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:08.483 12:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.483 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:08.483 ************************************ 00:15:08.483 START TEST accel_assign_opcode 00:15:08.483 ************************************ 00:15:08.483 12:34:50 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:15:08.483 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.483 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:08.483 [2024-10-01 12:34:50.724804] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:15:08.483 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:15:08.483 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.483 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:08.483 [2024-10-01 12:34:50.732764] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:15:08.483 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.483 12:34:50 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:15:08.483 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.483 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 12:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.051 12:34:51 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:15:09.051 12:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.051 12:34:51 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:15:09.051 12:34:51 -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 12:34:51 -- accel/accel_rpc.sh@42 -- # grep software 00:15:09.051 12:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.051 software 00:15:09.051 00:15:09.051 real 0m0.847s 00:15:09.051 user 0m0.042s 00:15:09.051 sys 0m0.022s 00:15:09.051 12:34:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.051 12:34:51 -- common/autotest_common.sh@10 -- # set +x 00:15:09.051 ************************************ 
00:15:09.051 END TEST accel_assign_opcode 00:15:09.051 ************************************ 00:15:09.310 12:34:51 -- accel/accel_rpc.sh@55 -- # killprocess 109067 00:15:09.310 12:34:51 -- common/autotest_common.sh@926 -- # '[' -z 109067 ']' 00:15:09.310 12:34:51 -- common/autotest_common.sh@930 -- # kill -0 109067 00:15:09.310 12:34:51 -- common/autotest_common.sh@931 -- # uname 00:15:09.310 12:34:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:09.310 12:34:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109067 00:15:09.310 12:34:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:09.310 12:34:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:09.310 12:34:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109067' 00:15:09.310 killing process with pid 109067 00:15:09.310 12:34:51 -- common/autotest_common.sh@945 -- # kill 109067 00:15:09.310 12:34:51 -- common/autotest_common.sh@950 -- # wait 109067 00:15:11.846 00:15:11.846 real 0m4.183s 00:15:11.846 user 0m4.003s 00:15:11.846 sys 0m0.592s 00:15:11.846 12:34:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.846 12:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:11.846 ************************************ 00:15:11.846 END TEST accel_rpc 00:15:11.846 ************************************ 00:15:11.846 12:34:53 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:11.846 12:34:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:11.846 12:34:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:11.846 12:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:11.846 ************************************ 00:15:11.846 START TEST app_cmdline 00:15:11.846 ************************************ 00:15:11.846 12:34:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:11.846 * Looking for test storage... 00:15:11.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:11.846 12:34:54 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:15:11.846 12:34:54 -- app/cmdline.sh@17 -- # spdk_tgt_pid=109198 00:15:11.846 12:34:54 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:15:11.846 12:34:54 -- app/cmdline.sh@18 -- # waitforlisten 109198 00:15:11.846 12:34:54 -- common/autotest_common.sh@819 -- # '[' -z 109198 ']' 00:15:11.846 12:34:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.846 12:34:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:11.846 12:34:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.846 12:34:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:11.847 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:11.847 [2024-10-01 12:34:54.195674] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:11.847 [2024-10-01 12:34:54.195890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109198 ] 00:15:11.847 [2024-10-01 12:34:54.374400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.106 [2024-10-01 12:34:54.558813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:12.106 [2024-10-01 12:34:54.559195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.485 12:34:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:13.485 12:34:55 -- common/autotest_common.sh@852 -- # return 0 00:15:13.485 12:34:55 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:15:13.485 { 00:15:13.485 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:15:13.485 "fields": { 00:15:13.485 "major": 24, 00:15:13.485 "minor": 1, 00:15:13.485 "patch": 1, 00:15:13.485 "suffix": "-pre", 00:15:13.485 "commit": "726a04d70" 00:15:13.485 } 00:15:13.485 } 00:15:13.485 12:34:55 -- app/cmdline.sh@22 -- # expected_methods=() 00:15:13.485 12:34:55 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:15:13.485 12:34:55 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:15:13.485 12:34:55 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:15:13.485 12:34:55 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:15:13.485 12:34:55 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:15:13.485 12:34:55 -- app/cmdline.sh@26 -- # sort 00:15:13.485 12:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.485 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:15:13.485 12:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.485 12:34:55 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:15:13.485 12:34:55 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:15:13.485 12:34:55 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:13.485 12:34:55 -- common/autotest_common.sh@640 -- # local es=0 00:15:13.485 12:34:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:13.485 12:34:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.485 12:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.485 12:34:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.485 12:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.485 12:34:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.485 12:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.485 12:34:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.485 12:34:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:13.485 12:34:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:13.745 request: 00:15:13.745 { 00:15:13.745 "method": "env_dpdk_get_mem_stats", 00:15:13.745 "req_id": 1 00:15:13.745 } 00:15:13.745 Got 
JSON-RPC error response 00:15:13.745 response: 00:15:13.745 { 00:15:13.745 "code": -32601, 00:15:13.745 "message": "Method not found" 00:15:13.745 } 00:15:13.745 12:34:56 -- common/autotest_common.sh@643 -- # es=1 00:15:13.745 12:34:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:13.745 12:34:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:13.745 12:34:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:13.745 12:34:56 -- app/cmdline.sh@1 -- # killprocess 109198 00:15:13.745 12:34:56 -- common/autotest_common.sh@926 -- # '[' -z 109198 ']' 00:15:13.745 12:34:56 -- common/autotest_common.sh@930 -- # kill -0 109198 00:15:13.745 12:34:56 -- common/autotest_common.sh@931 -- # uname 00:15:13.745 12:34:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:13.745 12:34:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109198 00:15:13.745 12:34:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:13.745 12:34:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:13.745 12:34:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109198' 00:15:13.745 killing process with pid 109198 00:15:13.745 12:34:56 -- common/autotest_common.sh@945 -- # kill 109198 00:15:13.745 12:34:56 -- common/autotest_common.sh@950 -- # wait 109198 00:15:16.284 00:15:16.284 real 0m4.368s 00:15:16.284 user 0m4.687s 00:15:16.284 sys 0m0.560s 00:15:16.284 12:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.284 12:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:16.284 ************************************ 00:15:16.284 END TEST app_cmdline 00:15:16.284 ************************************ 00:15:16.284 12:34:58 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:16.284 12:34:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:16.284 12:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:16.284 12:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:16.284 ************************************ 00:15:16.284 START TEST version 00:15:16.284 ************************************ 00:15:16.284 12:34:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:16.284 * Looking for test storage... 
00:15:16.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:16.284 12:34:58 -- app/version.sh@17 -- # get_header_version major 00:15:16.284 12:34:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.284 12:34:58 -- app/version.sh@14 -- # cut -f2 00:15:16.284 12:34:58 -- app/version.sh@14 -- # tr -d '"' 00:15:16.284 12:34:58 -- app/version.sh@17 -- # major=24 00:15:16.284 12:34:58 -- app/version.sh@18 -- # get_header_version minor 00:15:16.284 12:34:58 -- app/version.sh@14 -- # cut -f2 00:15:16.284 12:34:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.284 12:34:58 -- app/version.sh@14 -- # tr -d '"' 00:15:16.284 12:34:58 -- app/version.sh@18 -- # minor=1 00:15:16.284 12:34:58 -- app/version.sh@19 -- # get_header_version patch 00:15:16.284 12:34:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.284 12:34:58 -- app/version.sh@14 -- # tr -d '"' 00:15:16.284 12:34:58 -- app/version.sh@14 -- # cut -f2 00:15:16.284 12:34:58 -- app/version.sh@19 -- # patch=1 00:15:16.284 12:34:58 -- app/version.sh@20 -- # get_header_version suffix 00:15:16.284 12:34:58 -- app/version.sh@14 -- # tr -d '"' 00:15:16.284 12:34:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.284 12:34:58 -- app/version.sh@14 -- # cut -f2 00:15:16.284 12:34:58 -- app/version.sh@20 -- # suffix=-pre 00:15:16.284 12:34:58 -- app/version.sh@22 -- # version=24.1 00:15:16.284 12:34:58 -- app/version.sh@25 -- # (( patch != 0 )) 00:15:16.284 12:34:58 -- app/version.sh@25 -- # version=24.1.1 00:15:16.284 12:34:58 -- app/version.sh@28 -- # version=24.1.1rc0 00:15:16.284 12:34:58 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:16.284 12:34:58 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:15:16.284 12:34:58 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:15:16.284 12:34:58 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:15:16.284 00:15:16.284 real 0m0.197s 00:15:16.284 user 0m0.132s 00:15:16.284 sys 0m0.119s 00:15:16.284 12:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.284 12:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:16.284 ************************************ 00:15:16.284 END TEST version 00:15:16.284 ************************************ 00:15:16.284 12:34:58 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:15:16.284 12:34:58 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:15:16.284 12:34:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:16.284 12:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:16.284 12:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:16.285 ************************************ 00:15:16.285 START TEST blockdev_general 00:15:16.285 ************************************ 00:15:16.285 12:34:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:15:16.285 * Looking for test storage... 
00:15:16.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:16.285 12:34:58 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:16.285 12:34:58 -- bdev/nbd_common.sh@6 -- # set -e 00:15:16.285 12:34:58 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:16.285 12:34:58 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:16.285 12:34:58 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:16.285 12:34:58 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:16.285 12:34:58 -- bdev/blockdev.sh@18 -- # : 00:15:16.285 12:34:58 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:15:16.285 12:34:58 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:15:16.285 12:34:58 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:15:16.545 12:34:58 -- bdev/blockdev.sh@672 -- # uname -s 00:15:16.545 12:34:58 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:15:16.545 12:34:58 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:15:16.545 12:34:58 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:15:16.545 12:34:58 -- bdev/blockdev.sh@681 -- # crypto_device= 00:15:16.545 12:34:58 -- bdev/blockdev.sh@682 -- # dek= 00:15:16.545 12:34:58 -- bdev/blockdev.sh@683 -- # env_ctx= 00:15:16.545 12:34:58 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:15:16.545 12:34:58 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:15:16.545 12:34:58 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:15:16.545 12:34:58 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:15:16.545 12:34:58 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:15:16.545 12:34:58 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=109379 00:15:16.545 12:34:58 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:16.545 12:34:58 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:15:16.545 12:34:58 -- bdev/blockdev.sh@47 -- # waitforlisten 109379 00:15:16.545 12:34:58 -- common/autotest_common.sh@819 -- # '[' -z 109379 ']' 00:15:16.545 12:34:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.545 12:34:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:16.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.545 12:34:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.545 12:34:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:16.545 12:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:16.545 [2024-10-01 12:34:58.894381] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:16.545 [2024-10-01 12:34:58.894524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109379 ] 00:15:16.545 [2024-10-01 12:34:59.055555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.803 [2024-10-01 12:34:59.242438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.803 [2024-10-01 12:34:59.242624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.371 12:34:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:17.371 12:34:59 -- common/autotest_common.sh@852 -- # return 0 00:15:17.371 12:34:59 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:15:17.371 12:34:59 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:15:17.371 12:34:59 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:15:17.371 12:34:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.371 12:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:18.310 [2024-10-01 12:35:00.519484] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:18.310 [2024-10-01 12:35:00.519558] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:18.310 00:15:18.310 [2024-10-01 12:35:00.527451] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:18.310 [2024-10-01 12:35:00.527506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:18.310 00:15:18.310 Malloc0 00:15:18.310 Malloc1 00:15:18.310 Malloc2 00:15:18.310 Malloc3 00:15:18.310 Malloc4 00:15:18.310 Malloc5 00:15:18.569 Malloc6 00:15:18.569 Malloc7 00:15:18.569 Malloc8 00:15:18.569 Malloc9 00:15:18.569 [2024-10-01 12:35:00.977402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:18.569 [2024-10-01 12:35:00.977489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.569 [2024-10-01 12:35:00.977528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:18.570 [2024-10-01 12:35:00.977559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.570 [2024-10-01 12:35:00.979720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.570 [2024-10-01 12:35:00.979771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:18.570 TestPT 00:15:18.570 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.570 12:35:01 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:15:18.570 5000+0 records in 00:15:18.570 5000+0 records out 00:15:18.570 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0336079 s, 305 MB/s 00:15:18.570 12:35:01 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:15:18.570 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.570 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.830 AIO0 00:15:18.830 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.830 12:35:01 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:15:18.830 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.830 12:35:01 -- common/autotest_common.sh@10 -- # set +x 
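[Editor's note] The AIO target assembled above can be reproduced by hand against a standalone spdk_tgt with plain rpc.py calls. A minimal sketch, using the same paths as this run and assuming the default /var/tmp/spdk.sock RPC socket:

    # 10 MB backing file, exactly as blockdev.sh@74 created it above.
    dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
    # Register it as bdev "AIO0" with a 2048-byte block size (blockdev.sh@75).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
    # Let examine callbacks settle, then list unclaimed bdevs the same way
    # blockdev.sh@746/@747 do just below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
        jq -r '.[] | select(.claimed == false) | .name'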
00:15:18.830 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.830 12:35:01 -- bdev/blockdev.sh@738 -- # cat 00:15:18.830 12:35:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:15:18.830 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.830 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.830 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.830 12:35:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:15:18.830 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.830 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.830 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.830 12:35:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:18.830 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.830 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.830 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.830 12:35:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:15:18.830 12:35:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:15:18.830 12:35:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:15:18.830 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.830 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.830 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.830 12:35:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:15:18.830 12:35:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:15:18.831 12:35:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3db6f0e5-1efd-4e10-aba9-fe4a77e38807"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3db6f0e5-1efd-4e10-aba9-fe4a77e38807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1ee996a5-1308-5678-a3ee-27c89587608b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1ee996a5-1308-5678-a3ee-27c89587608b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "54c2abfd-d526-5a53-8ffd-59f34db7ba3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "54c2abfd-d526-5a53-8ffd-59f34db7ba3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bc0f33f8-6954-535e-8474-bfa1338d9296"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc0f33f8-6954-535e-8474-bfa1338d9296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "af4fd4b4-4d4f-52d6-92b7-32a39d8fab2f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af4fd4b4-4d4f-52d6-92b7-32a39d8fab2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "30347733-0a7b-53d5-97f2-5b7c5ed617aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30347733-0a7b-53d5-97f2-5b7c5ed617aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1a8f75ca-d20f-5e7d-8250-90cd9fbc8bef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1a8f75ca-d20f-5e7d-8250-90cd9fbc8bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "9ccaa916-eb43-5920-aec8-32dfa368941d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ccaa916-eb43-5920-aec8-32dfa368941d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d939a2a7-a886-52c3-8e99-eb4b0df18d52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d939a2a7-a886-52c3-8e99-eb4b0df18d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9c0eda2c-25ec-5379-9823-301d5d73b3e0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9c0eda2c-25ec-5379-9823-301d5d73b3e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "324ca86c-bc11-5a5c-b54f-e0c6d826d185"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "324ca86c-bc11-5a5c-b54f-e0c6d826d185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d72252d0-c185-549e-b58a-ba8a2f73cc89"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d72252d0-c185-549e-b58a-ba8a2f73cc89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "8a119d07-1c81-464a-b953-f21ff82ac9b0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8a119d07-1c81-464a-b953-f21ff82ac9b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8a119d07-1c81-464a-b953-f21ff82ac9b0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "089d8dae-30fd-431b-ae45-ca7ca2919929",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8c6eccc2-4007-4b3b-ab05-a84cddc860ad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "92c2d48f-e61d-4ef9-a606-4fc2057d7a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92c2d48f-e61d-4ef9-a606-4fc2057d7a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "92c2d48f-e61d-4ef9-a606-4fc2057d7a94",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "fd82e449-01af-4383-9e2b-20df2fa74a41",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4e511596-0ded-419c-8a7d-7f7373835034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d3120532-d5df-4159-83db-435f7c41551f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d3120532-d5df-4159-83db-435f7c41551f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d3120532-d5df-4159-83db-435f7c41551f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "84fb3ce4-5842-46fd-81cf-2dbc2040d858",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b5a51e76-dd2b-4789-a94a-00440461fd6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b82abbcb-338b-451f-9bf1-db267deaea60"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b82abbcb-338b-451f-9bf1-db267deaea60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:15:18.831 12:35:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:15:18.831 12:35:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:15:18.831 12:35:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:15:18.831 12:35:01 -- bdev/blockdev.sh@752 -- # killprocess 109379 00:15:18.831 12:35:01 -- common/autotest_common.sh@926 -- # '[' -z 109379 ']' 00:15:18.831 12:35:01 -- common/autotest_common.sh@930 -- # kill -0 109379 00:15:18.831 12:35:01 -- common/autotest_common.sh@931 -- # uname 00:15:18.831 12:35:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:18.831 12:35:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109379 00:15:18.831 12:35:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:18.831 killing process with pid 109379 00:15:18.831 12:35:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:18.831 12:35:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109379' 00:15:18.831 12:35:01 -- common/autotest_common.sh@945 -- # kill 109379 00:15:18.831 12:35:01 -- common/autotest_common.sh@950 -- # wait 109379 00:15:22.124 12:35:04 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:22.124 12:35:04 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:15:22.124 12:35:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
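[Editor's note] The run_test call above launches the hello_bdev example. Stripped of the harness, the command under test reduces to the single invocation below; the trailing '' in the logged command line appears to be the empty $env_ctx set at blockdev.sh@683 and can be dropped when run by hand:

    # Open Malloc0 from the JSON config, write "Hello World!" to it, and read
    # the string back; the hello_bdev.c NOTICE lines that follow trace each step.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0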
00:15:22.124 12:35:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:22.124 12:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:22.124 ************************************ 00:15:22.124 START TEST bdev_hello_world 00:15:22.124 ************************************ 00:15:22.124 12:35:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:15:22.124 [2024-10-01 12:35:04.598399] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:22.124 [2024-10-01 12:35:04.598544] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109482 ] 00:15:22.385 [2024-10-01 12:35:04.763928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.645 [2024-10-01 12:35:04.942924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.905 [2024-10-01 12:35:05.338041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:22.905 [2024-10-01 12:35:05.338120] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:22.905 [2024-10-01 12:35:05.345974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:22.905 [2024-10-01 12:35:05.346053] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:22.905 [2024-10-01 12:35:05.353972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:22.905 [2024-10-01 12:35:05.354009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:22.905 [2024-10-01 12:35:05.354048] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:23.164 [2024-10-01 12:35:05.574441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:23.164 [2024-10-01 12:35:05.574536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.164 [2024-10-01 12:35:05.574580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:23.164 [2024-10-01 12:35:05.574603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.164 [2024-10-01 12:35:05.576790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.164 [2024-10-01 12:35:05.576844] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:23.423 [2024-10-01 12:35:05.906644] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:23.423 [2024-10-01 12:35:05.906812] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:15:23.423 [2024-10-01 12:35:05.906993] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:23.423 [2024-10-01 12:35:05.907115] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:23.423 [2024-10-01 12:35:05.907267] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:23.423 [2024-10-01 12:35:05.907328] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:23.423 [2024-10-01 12:35:05.907469] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:23.423 00:15:23.423 [2024-10-01 12:35:05.907574] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:25.961 00:15:25.961 real 0m3.393s 00:15:25.961 user 0m2.897s 00:15:25.961 sys 0m0.329s 00:15:25.961 12:35:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.961 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:25.961 ************************************ 00:15:25.961 END TEST bdev_hello_world 00:15:25.961 ************************************ 00:15:25.961 12:35:07 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:15:25.961 12:35:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:25.961 12:35:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:25.961 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:25.961 ************************************ 00:15:25.961 START TEST bdev_bounds 00:15:25.961 ************************************ 00:15:25.961 12:35:07 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:15:25.961 12:35:07 -- bdev/blockdev.sh@288 -- # bdevio_pid=109545 00:15:25.961 12:35:07 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:25.961 12:35:07 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:25.961 Process bdevio pid: 109545 00:15:25.961 12:35:07 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 109545' 00:15:25.961 12:35:07 -- bdev/blockdev.sh@291 -- # waitforlisten 109545 00:15:25.961 12:35:07 -- common/autotest_common.sh@819 -- # '[' -z 109545 ']' 00:15:25.961 12:35:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.961 12:35:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.961 12:35:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.961 12:35:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.961 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:25.961 [2024-10-01 12:35:08.062548] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:25.961 [2024-10-01 12:35:08.062675] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109545 ] 00:15:25.961 [2024-10-01 12:35:08.235128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.962 [2024-10-01 12:35:08.432924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.962 [2024-10-01 12:35:08.433426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.962 [2024-10-01 12:35:08.433431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.529 [2024-10-01 12:35:08.874383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:26.529 [2024-10-01 12:35:08.874668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:26.529 [2024-10-01 12:35:08.882329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:26.529 [2024-10-01 12:35:08.882498] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:26.529 [2024-10-01 12:35:08.890358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:26.529 [2024-10-01 12:35:08.890490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:26.529 [2024-10-01 12:35:08.890584] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:26.788 [2024-10-01 12:35:09.124483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:26.788 [2024-10-01 12:35:09.124992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.788 [2024-10-01 12:35:09.125145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:26.788 [2024-10-01 12:35:09.125241] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.788 [2024-10-01 12:35:09.127602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.788 [2024-10-01 12:35:09.127767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:27.080 12:35:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:27.080 12:35:09 -- common/autotest_common.sh@852 -- # return 0 00:15:27.080 12:35:09 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:27.340 I/O targets: 00:15:27.340 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:15:27.340 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:15:27.340 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:15:27.340 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:15:27.340 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:15:27.340 raid0: 131072 blocks of 512 bytes (64 MiB) 00:15:27.340 concat0: 131072 blocks of 512 bytes (64 MiB) 00:15:27.340 raid1: 65536 blocks of 512 bytes (32 MiB) 00:15:27.340 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:15:27.340 00:15:27.340 00:15:27.340 CUnit - A unit testing framework for C - Version 2.1-3 00:15:27.340 http://cunit.sourceforge.net/ 00:15:27.340 00:15:27.340 00:15:27.340 Suite: bdevio tests on: AIO0 00:15:27.340 Test: blockdev write read block ...passed 00:15:27.340 Test: blockdev write zeroes read block ...passed 00:15:27.340 Test: blockdev write zeroes read no split ...passed 00:15:27.340 Test: blockdev write zeroes read split ...passed 00:15:27.340 Test: blockdev write zeroes read split partial ...passed 00:15:27.340 Test: blockdev reset ...passed 00:15:27.340 Test: blockdev write read 8 blocks ...passed 00:15:27.340 Test: blockdev write read size > 128k ...passed 00:15:27.340 Test: blockdev write read invalid size ...passed 00:15:27.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.340 Test: blockdev write read max offset ...passed 00:15:27.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.340 Test: blockdev writev readv 8 blocks ...passed 00:15:27.340 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.340 Test: blockdev writev readv block ...passed 00:15:27.340 Test: blockdev writev readv size > 128k ...passed 00:15:27.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.340 Test: blockdev comparev and writev ...passed 00:15:27.340 Test: blockdev nvme passthru rw ...passed 00:15:27.340 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.340 Test: blockdev nvme admin passthru ...passed 00:15:27.340 Test: blockdev copy ...passed 00:15:27.340 Suite: bdevio tests on: raid1 00:15:27.340 Test: blockdev write read block ...passed 00:15:27.340 Test: blockdev write zeroes read block ...passed 00:15:27.340 Test: blockdev write zeroes read no split ...passed 00:15:27.340 Test: blockdev write zeroes read split ...passed 00:15:27.340 Test: blockdev write zeroes read split partial ...passed 00:15:27.340 Test: blockdev reset ...passed 00:15:27.340 Test: blockdev write read 8 blocks ...passed 00:15:27.340 Test: blockdev write read size > 128k ...passed 00:15:27.340 Test: blockdev write read invalid size ...passed 00:15:27.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.340 Test: blockdev write read max offset ...passed 00:15:27.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.340 Test: blockdev writev readv 8 blocks ...passed 00:15:27.340 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.340 Test: blockdev writev readv block ...passed 00:15:27.340 Test: blockdev writev readv size > 128k ...passed 00:15:27.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.340 Test: blockdev comparev and writev ...passed 00:15:27.340 Test: blockdev nvme passthru rw ...passed 00:15:27.340 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.340 Test: blockdev nvme admin passthru ...passed 00:15:27.340 Test: blockdev copy ...passed 00:15:27.340 Suite: bdevio tests on: concat0 00:15:27.340 Test: blockdev write read block ...passed 00:15:27.340 Test: blockdev write zeroes read block ...passed 00:15:27.340 Test: blockdev write zeroes read no split ...passed 00:15:27.340 Test: blockdev write zeroes read split ...passed 00:15:27.600 Test: blockdev write zeroes read split partial ...passed 00:15:27.600 Test: blockdev reset 
...passed 00:15:27.600 Test: blockdev write read 8 blocks ...passed 00:15:27.600 Test: blockdev write read size > 128k ...passed 00:15:27.600 Test: blockdev write read invalid size ...passed 00:15:27.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.600 Test: blockdev write read max offset ...passed 00:15:27.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.600 Test: blockdev writev readv 8 blocks ...passed 00:15:27.600 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.600 Test: blockdev writev readv block ...passed 00:15:27.600 Test: blockdev writev readv size > 128k ...passed 00:15:27.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.600 Test: blockdev comparev and writev ...passed 00:15:27.600 Test: blockdev nvme passthru rw ...passed 00:15:27.600 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.600 Test: blockdev nvme admin passthru ...passed 00:15:27.600 Test: blockdev copy ...passed 00:15:27.600 Suite: bdevio tests on: raid0 00:15:27.600 Test: blockdev write read block ...passed 00:15:27.600 Test: blockdev write zeroes read block ...passed 00:15:27.600 Test: blockdev write zeroes read no split ...passed 00:15:27.600 Test: blockdev write zeroes read split ...passed 00:15:27.600 Test: blockdev write zeroes read split partial ...passed 00:15:27.600 Test: blockdev reset ...passed 00:15:27.600 Test: blockdev write read 8 blocks ...passed 00:15:27.600 Test: blockdev write read size > 128k ...passed 00:15:27.600 Test: blockdev write read invalid size ...passed 00:15:27.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.600 Test: blockdev write read max offset ...passed 00:15:27.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.600 Test: blockdev writev readv 8 blocks ...passed 00:15:27.600 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.600 Test: blockdev writev readv block ...passed 00:15:27.600 Test: blockdev writev readv size > 128k ...passed 00:15:27.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.600 Test: blockdev comparev and writev ...passed 00:15:27.600 Test: blockdev nvme passthru rw ...passed 00:15:27.600 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.600 Test: blockdev nvme admin passthru ...passed 00:15:27.600 Test: blockdev copy ...passed 00:15:27.600 Suite: bdevio tests on: TestPT 00:15:27.600 Test: blockdev write read block ...passed 00:15:27.600 Test: blockdev write zeroes read block ...passed 00:15:27.600 Test: blockdev write zeroes read no split ...passed 00:15:27.600 Test: blockdev write zeroes read split ...passed 00:15:27.600 Test: blockdev write zeroes read split partial ...passed 00:15:27.600 Test: blockdev reset ...passed 00:15:27.600 Test: blockdev write read 8 blocks ...passed 00:15:27.600 Test: blockdev write read size > 128k ...passed 00:15:27.600 Test: blockdev write read invalid size ...passed 00:15:27.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.600 Test: blockdev write read max offset ...passed 00:15:27.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.600 Test: blockdev writev readv 8 blocks 
...passed 00:15:27.600 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.600 Test: blockdev writev readv block ...passed 00:15:27.600 Test: blockdev writev readv size > 128k ...passed 00:15:27.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.600 Test: blockdev comparev and writev ...passed 00:15:27.600 Test: blockdev nvme passthru rw ...passed 00:15:27.600 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.600 Test: blockdev nvme admin passthru ...passed 00:15:27.600 Test: blockdev copy ...passed 00:15:27.600 Suite: bdevio tests on: Malloc2p7 00:15:27.600 Test: blockdev write read block ...passed 00:15:27.600 Test: blockdev write zeroes read block ...passed 00:15:27.600 Test: blockdev write zeroes read no split ...passed 00:15:27.600 Test: blockdev write zeroes read split ...passed 00:15:27.600 Test: blockdev write zeroes read split partial ...passed 00:15:27.600 Test: blockdev reset ...passed 00:15:27.600 Test: blockdev write read 8 blocks ...passed 00:15:27.600 Test: blockdev write read size > 128k ...passed 00:15:27.600 Test: blockdev write read invalid size ...passed 00:15:27.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.600 Test: blockdev write read max offset ...passed 00:15:27.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.600 Test: blockdev writev readv 8 blocks ...passed 00:15:27.600 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.600 Test: blockdev writev readv block ...passed 00:15:27.600 Test: blockdev writev readv size > 128k ...passed 00:15:27.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.600 Test: blockdev comparev and writev ...passed 00:15:27.601 Test: blockdev nvme passthru rw ...passed 00:15:27.601 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.601 Test: blockdev nvme admin passthru ...passed 00:15:27.601 Test: blockdev copy ...passed 00:15:27.601 Suite: bdevio tests on: Malloc2p6 00:15:27.601 Test: blockdev write read block ...passed 00:15:27.601 Test: blockdev write zeroes read block ...passed 00:15:27.601 Test: blockdev write zeroes read no split ...passed 00:15:27.860 Test: blockdev write zeroes read split ...passed 00:15:27.860 Test: blockdev write zeroes read split partial ...passed 00:15:27.860 Test: blockdev reset ...passed 00:15:27.860 Test: blockdev write read 8 blocks ...passed 00:15:27.860 Test: blockdev write read size > 128k ...passed 00:15:27.860 Test: blockdev write read invalid size ...passed 00:15:27.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.860 Test: blockdev write read max offset ...passed 00:15:27.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.860 Test: blockdev writev readv 8 blocks ...passed 00:15:27.860 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.860 Test: blockdev writev readv block ...passed 00:15:27.860 Test: blockdev writev readv size > 128k ...passed 00:15:27.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.860 Test: blockdev comparev and writev ...passed 00:15:27.860 Test: blockdev nvme passthru rw ...passed 00:15:27.860 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.860 Test: blockdev nvme admin passthru ...passed 00:15:27.860 Test: blockdev copy ...passed 
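Each suite above runs the same fixed battery of 23 blockdev tests (from "write read block" through "copy"), so across the 16 I/O targets the expected total is 16 x 23 = 368 tests, which is exactly what the run summary further down reports.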
00:15:27.860 Suite: bdevio tests on: Malloc2p5 00:15:27.860 Test: blockdev write read block ...passed 00:15:27.860 Test: blockdev write zeroes read block ...passed 00:15:27.860 Test: blockdev write zeroes read no split ...passed 00:15:27.860 Test: blockdev write zeroes read split ...passed 00:15:27.860 Test: blockdev write zeroes read split partial ...passed 00:15:27.860 Test: blockdev reset ...passed 00:15:27.860 Test: blockdev write read 8 blocks ...passed 00:15:27.860 Test: blockdev write read size > 128k ...passed 00:15:27.860 Test: blockdev write read invalid size ...passed 00:15:27.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.860 Test: blockdev write read max offset ...passed 00:15:27.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.860 Test: blockdev writev readv 8 blocks ...passed 00:15:27.860 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.860 Test: blockdev writev readv block ...passed 00:15:27.860 Test: blockdev writev readv size > 128k ...passed 00:15:27.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.860 Test: blockdev comparev and writev ...passed 00:15:27.860 Test: blockdev nvme passthru rw ...passed 00:15:27.860 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.860 Test: blockdev nvme admin passthru ...passed 00:15:27.860 Test: blockdev copy ...passed 00:15:27.860 Suite: bdevio tests on: Malloc2p4 00:15:27.860 Test: blockdev write read block ...passed 00:15:27.860 Test: blockdev write zeroes read block ...passed 00:15:27.860 Test: blockdev write zeroes read no split ...passed 00:15:27.860 Test: blockdev write zeroes read split ...passed 00:15:27.860 Test: blockdev write zeroes read split partial ...passed 00:15:27.860 Test: blockdev reset ...passed 00:15:27.860 Test: blockdev write read 8 blocks ...passed 00:15:27.860 Test: blockdev write read size > 128k ...passed 00:15:27.860 Test: blockdev write read invalid size ...passed 00:15:27.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.860 Test: blockdev write read max offset ...passed 00:15:27.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.860 Test: blockdev writev readv 8 blocks ...passed 00:15:27.860 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.860 Test: blockdev writev readv block ...passed 00:15:27.860 Test: blockdev writev readv size > 128k ...passed 00:15:27.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.860 Test: blockdev comparev and writev ...passed 00:15:27.860 Test: blockdev nvme passthru rw ...passed 00:15:27.860 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.860 Test: blockdev nvme admin passthru ...passed 00:15:27.860 Test: blockdev copy ...passed 00:15:27.860 Suite: bdevio tests on: Malloc2p3 00:15:27.860 Test: blockdev write read block ...passed 00:15:27.860 Test: blockdev write zeroes read block ...passed 00:15:27.860 Test: blockdev write zeroes read no split ...passed 00:15:27.860 Test: blockdev write zeroes read split ...passed 00:15:27.860 Test: blockdev write zeroes read split partial ...passed 00:15:27.860 Test: blockdev reset ...passed 00:15:27.860 Test: blockdev write read 8 blocks ...passed 00:15:27.860 Test: blockdev write read size > 128k ...passed 00:15:27.860 Test: 
blockdev write read invalid size ...passed 00:15:27.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.860 Test: blockdev write read max offset ...passed 00:15:27.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.860 Test: blockdev writev readv 8 blocks ...passed 00:15:27.860 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.860 Test: blockdev writev readv block ...passed 00:15:27.860 Test: blockdev writev readv size > 128k ...passed 00:15:27.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.860 Test: blockdev comparev and writev ...passed 00:15:27.860 Test: blockdev nvme passthru rw ...passed 00:15:27.860 Test: blockdev nvme passthru vendor specific ...passed 00:15:27.860 Test: blockdev nvme admin passthru ...passed 00:15:27.860 Test: blockdev copy ...passed 00:15:27.860 Suite: bdevio tests on: Malloc2p2 00:15:27.860 Test: blockdev write read block ...passed 00:15:27.860 Test: blockdev write zeroes read block ...passed 00:15:27.860 Test: blockdev write zeroes read no split ...passed 00:15:28.119 Test: blockdev write zeroes read split ...passed 00:15:28.119 Test: blockdev write zeroes read split partial ...passed 00:15:28.119 Test: blockdev reset ...passed 00:15:28.119 Test: blockdev write read 8 blocks ...passed 00:15:28.119 Test: blockdev write read size > 128k ...passed 00:15:28.119 Test: blockdev write read invalid size ...passed 00:15:28.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:28.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:28.119 Test: blockdev write read max offset ...passed 00:15:28.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:28.119 Test: blockdev writev readv 8 blocks ...passed 00:15:28.119 Test: blockdev writev readv 30 x 1block ...passed 00:15:28.119 Test: blockdev writev readv block ...passed 00:15:28.119 Test: blockdev writev readv size > 128k ...passed 00:15:28.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:28.119 Test: blockdev comparev and writev ...passed 00:15:28.119 Test: blockdev nvme passthru rw ...passed 00:15:28.119 Test: blockdev nvme passthru vendor specific ...passed 00:15:28.119 Test: blockdev nvme admin passthru ...passed 00:15:28.119 Test: blockdev copy ...passed 00:15:28.119 Suite: bdevio tests on: Malloc2p1 00:15:28.119 Test: blockdev write read block ...passed 00:15:28.119 Test: blockdev write zeroes read block ...passed 00:15:28.119 Test: blockdev write zeroes read no split ...passed 00:15:28.119 Test: blockdev write zeroes read split ...passed 00:15:28.119 Test: blockdev write zeroes read split partial ...passed 00:15:28.119 Test: blockdev reset ...passed 00:15:28.119 Test: blockdev write read 8 blocks ...passed 00:15:28.119 Test: blockdev write read size > 128k ...passed 00:15:28.119 Test: blockdev write read invalid size ...passed 00:15:28.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:28.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:28.119 Test: blockdev write read max offset ...passed 00:15:28.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:28.119 Test: blockdev writev readv 8 blocks ...passed 00:15:28.119 Test: blockdev writev readv 30 x 1block ...passed 00:15:28.119 Test: blockdev writev readv block ...passed 
00:15:28.119 Test: blockdev writev readv size > 128k ...passed 00:15:28.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:28.119 Test: blockdev comparev and writev ...passed 00:15:28.119 Test: blockdev nvme passthru rw ...passed 00:15:28.119 Test: blockdev nvme passthru vendor specific ...passed 00:15:28.119 Test: blockdev nvme admin passthru ...passed 00:15:28.119 Test: blockdev copy ...passed 00:15:28.119 Suite: bdevio tests on: Malloc2p0 00:15:28.119 Test: blockdev write read block ...passed 00:15:28.119 Test: blockdev write zeroes read block ...passed 00:15:28.119 Test: blockdev write zeroes read no split ...passed 00:15:28.119 Test: blockdev write zeroes read split ...passed 00:15:28.119 Test: blockdev write zeroes read split partial ...passed 00:15:28.119 Test: blockdev reset ...passed 00:15:28.119 Test: blockdev write read 8 blocks ...passed 00:15:28.119 Test: blockdev write read size > 128k ...passed 00:15:28.119 Test: blockdev write read invalid size ...passed 00:15:28.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:28.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:28.120 Test: blockdev write read max offset ...passed 00:15:28.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:28.120 Test: blockdev writev readv 8 blocks ...passed 00:15:28.120 Test: blockdev writev readv 30 x 1block ...passed 00:15:28.120 Test: blockdev writev readv block ...passed 00:15:28.120 Test: blockdev writev readv size > 128k ...passed 00:15:28.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:28.120 Test: blockdev comparev and writev ...passed 00:15:28.120 Test: blockdev nvme passthru rw ...passed 00:15:28.120 Test: blockdev nvme passthru vendor specific ...passed 00:15:28.120 Test: blockdev nvme admin passthru ...passed 00:15:28.120 Test: blockdev copy ...passed 00:15:28.120 Suite: bdevio tests on: Malloc1p1 00:15:28.120 Test: blockdev write read block ...passed 00:15:28.120 Test: blockdev write zeroes read block ...passed 00:15:28.120 Test: blockdev write zeroes read no split ...passed 00:15:28.120 Test: blockdev write zeroes read split ...passed 00:15:28.120 Test: blockdev write zeroes read split partial ...passed 00:15:28.120 Test: blockdev reset ...passed 00:15:28.120 Test: blockdev write read 8 blocks ...passed 00:15:28.120 Test: blockdev write read size > 128k ...passed 00:15:28.120 Test: blockdev write read invalid size ...passed 00:15:28.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:28.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:28.120 Test: blockdev write read max offset ...passed 00:15:28.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:28.120 Test: blockdev writev readv 8 blocks ...passed 00:15:28.120 Test: blockdev writev readv 30 x 1block ...passed 00:15:28.120 Test: blockdev writev readv block ...passed 00:15:28.120 Test: blockdev writev readv size > 128k ...passed 00:15:28.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:28.120 Test: blockdev comparev and writev ...passed 00:15:28.120 Test: blockdev nvme passthru rw ...passed 00:15:28.120 Test: blockdev nvme passthru vendor specific ...passed 00:15:28.120 Test: blockdev nvme admin passthru ...passed 00:15:28.120 Test: blockdev copy ...passed 00:15:28.120 Suite: bdevio tests on: Malloc1p0 00:15:28.120 Test: blockdev write read block ...passed 00:15:28.120 Test: blockdev 
write zeroes read block ...passed 00:15:28.120 Test: blockdev write zeroes read no split ...passed 00:15:28.379 Test: blockdev write zeroes read split ...passed 00:15:28.379 Test: blockdev write zeroes read split partial ...passed 00:15:28.379 Test: blockdev reset ...passed 00:15:28.379 Test: blockdev write read 8 blocks ...passed 00:15:28.379 Test: blockdev write read size > 128k ...passed 00:15:28.379 Test: blockdev write read invalid size ...passed 00:15:28.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:28.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:28.379 Test: blockdev write read max offset ...passed 00:15:28.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:28.379 Test: blockdev writev readv 8 blocks ...passed 00:15:28.379 Test: blockdev writev readv 30 x 1block ...passed 00:15:28.379 Test: blockdev writev readv block ...passed 00:15:28.379 Test: blockdev writev readv size > 128k ...passed 00:15:28.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:28.379 Test: blockdev comparev and writev ...passed 00:15:28.379 Test: blockdev nvme passthru rw ...passed 00:15:28.379 Test: blockdev nvme passthru vendor specific ...passed 00:15:28.379 Test: blockdev nvme admin passthru ...passed 00:15:28.379 Test: blockdev copy ...passed 00:15:28.379 Suite: bdevio tests on: Malloc0 00:15:28.379 Test: blockdev write read block ...passed 00:15:28.379 Test: blockdev write zeroes read block ...passed 00:15:28.379 Test: blockdev write zeroes read no split ...passed 00:15:28.379 Test: blockdev write zeroes read split ...passed 00:15:28.379 Test: blockdev write zeroes read split partial ...passed 00:15:28.379 Test: blockdev reset ...passed 00:15:28.379 Test: blockdev write read 8 blocks ...passed 00:15:28.379 Test: blockdev write read size > 128k ...passed 00:15:28.379 Test: blockdev write read invalid size ...passed 00:15:28.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:28.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:28.379 Test: blockdev write read max offset ...passed 00:15:28.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:28.379 Test: blockdev writev readv 8 blocks ...passed 00:15:28.379 Test: blockdev writev readv 30 x 1block ...passed 00:15:28.379 Test: blockdev writev readv block ...passed 00:15:28.379 Test: blockdev writev readv size > 128k ...passed 00:15:28.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:28.379 Test: blockdev comparev and writev ...passed 00:15:28.379 Test: blockdev nvme passthru rw ...passed 00:15:28.379 Test: blockdev nvme passthru vendor specific ...passed 00:15:28.379 Test: blockdev nvme admin passthru ...passed 00:15:28.379 Test: blockdev copy ...passed 00:15:28.379 00:15:28.379 Run Summary: Type Total Ran Passed Failed Inactive 00:15:28.379 suites 16 16 n/a 0 0 00:15:28.379 tests 368 368 368 0 0 00:15:28.379 asserts 2224 2224 2224 0 n/a 00:15:28.379 00:15:28.379 Elapsed time = 3.279 seconds 00:15:28.379 0 00:15:28.379 12:35:10 -- bdev/blockdev.sh@293 -- # killprocess 109545 00:15:28.379 12:35:10 -- common/autotest_common.sh@926 -- # '[' -z 109545 ']' 00:15:28.379 12:35:10 -- common/autotest_common.sh@930 -- # kill -0 109545 00:15:28.379 12:35:10 -- common/autotest_common.sh@931 -- # uname 00:15:28.379 12:35:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:28.379 12:35:10 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109545 00:15:28.379 12:35:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:28.379 12:35:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:28.379 12:35:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109545' 00:15:28.379 killing process with pid 109545 00:15:28.379 12:35:10 -- common/autotest_common.sh@945 -- # kill 109545 00:15:28.379 12:35:10 -- common/autotest_common.sh@950 -- # wait 109545 00:15:30.912 12:35:12 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:15:30.912 00:15:30.912 real 0m5.004s 00:15:30.912 user 0m12.944s 00:15:30.912 sys 0m0.540s 00:15:30.913 12:35:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.913 12:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:30.913 ************************************ 00:15:30.913 END TEST bdev_bounds 00:15:30.913 ************************************ 00:15:30.913 12:35:13 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:15:30.913 12:35:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:30.913 12:35:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:30.913 12:35:13 -- common/autotest_common.sh@10 -- # set +x 00:15:30.913 ************************************ 00:15:30.913 START TEST bdev_nbd 00:15:30.913 ************************************ 00:15:30.913 12:35:13 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:15:30.913 12:35:13 -- bdev/blockdev.sh@298 -- # uname -s 00:15:30.913 12:35:13 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:15:30.913 12:35:13 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.913 12:35:13 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:30.913 12:35:13 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:30.913 12:35:13 -- bdev/blockdev.sh@302 -- # local bdev_all 00:15:30.913 12:35:13 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:15:30.913 12:35:13 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:15:30.913 12:35:13 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:30.913 12:35:13 -- bdev/blockdev.sh@309 -- # local nbd_all 00:15:30.913 12:35:13 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:15:30.913 12:35:13 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:30.913 12:35:13 -- bdev/blockdev.sh@312 -- # local nbd_list 00:15:30.913 12:35:13 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:30.913 12:35:13 -- bdev/blockdev.sh@313 -- # local bdev_list 00:15:30.913 12:35:13 -- bdev/blockdev.sh@316 -- # nbd_pid=109641 00:15:30.913 12:35:13 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:30.913 12:35:13 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:30.913 12:35:13 -- bdev/blockdev.sh@318 -- # waitforlisten 109641 /var/tmp/spdk-nbd.sock 00:15:30.913 12:35:13 -- common/autotest_common.sh@819 -- # '[' -z 109641 ']' 00:15:30.913 12:35:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:30.913 12:35:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:30.913 12:35:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:30.913 12:35:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:30.913 12:35:13 -- common/autotest_common.sh@10 -- # set +x 00:15:30.913 [2024-10-01 12:35:13.155276] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:30.913 [2024-10-01 12:35:13.155425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.913 [2024-10-01 12:35:13.321817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.171 [2024-10-01 12:35:13.481040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.429 [2024-10-01 12:35:13.779065] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:31.429 [2024-10-01 12:35:13.779355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:31.429 [2024-10-01 12:35:13.787011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:31.429 [2024-10-01 12:35:13.787175] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:31.429 [2024-10-01 12:35:13.795007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:31.429 [2024-10-01 12:35:13.795142] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:31.429 [2024-10-01 12:35:13.795230] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:31.429 [2024-10-01 12:35:13.954393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:31.429 [2024-10-01 12:35:13.954669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.429 [2024-10-01 12:35:13.954791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:31.429 [2024-10-01 12:35:13.954919] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.429 [2024-10-01 12:35:13.957119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.429 [2024-10-01 12:35:13.957283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:32.366 12:35:14 -- common/autotest_common.sh@848 -- # (( i == 0 
)) 00:15:32.366 12:35:14 -- common/autotest_common.sh@852 -- # return 0 00:15:32.366 12:35:14 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@24 -- # local i 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:32.366 12:35:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:32.366 12:35:14 -- common/autotest_common.sh@857 -- # local i 00:15:32.366 12:35:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:32.366 12:35:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:32.366 12:35:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:32.366 12:35:14 -- common/autotest_common.sh@861 -- # break 00:15:32.366 12:35:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:32.366 12:35:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:32.366 12:35:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.366 1+0 records in 00:15:32.366 1+0 records out 00:15:32.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193376 s, 21.2 MB/s 00:15:32.366 12:35:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.366 12:35:14 -- common/autotest_common.sh@874 -- # size=4096 00:15:32.366 12:35:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.366 12:35:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:32.366 12:35:14 -- common/autotest_common.sh@877 -- # return 0 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:32.366 12:35:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:15:32.625 12:35:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:32.625 12:35:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:32.625 12:35:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:32.625 12:35:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:32.625 12:35:15 -- common/autotest_common.sh@857 -- # local i 00:15:32.625 12:35:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:32.625 12:35:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:32.625 12:35:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:32.625 12:35:15 -- common/autotest_common.sh@861 -- # break 00:15:32.625 12:35:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:32.625 12:35:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:32.625 12:35:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.625 1+0 records in 00:15:32.625 1+0 records out 00:15:32.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332344 s, 12.3 MB/s 00:15:32.625 12:35:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.625 12:35:15 -- common/autotest_common.sh@874 -- # size=4096 00:15:32.625 12:35:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.625 12:35:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:32.625 12:35:15 -- common/autotest_common.sh@877 -- # return 0 00:15:32.625 12:35:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.625 12:35:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:32.625 12:35:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:15:32.883 12:35:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:32.883 12:35:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:32.883 12:35:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:32.883 12:35:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:15:32.883 12:35:15 -- common/autotest_common.sh@857 -- # local i 00:15:32.883 12:35:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:32.883 12:35:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:32.883 12:35:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:15:32.883 12:35:15 -- common/autotest_common.sh@861 -- # break 00:15:32.883 12:35:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:32.883 12:35:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:32.883 12:35:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.883 1+0 records in 00:15:32.883 1+0 records out 00:15:32.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024953 s, 16.4 MB/s 00:15:32.883 12:35:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.883 12:35:15 -- common/autotest_common.sh@874 -- # size=4096 00:15:32.883 12:35:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.883 12:35:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:32.883 12:35:15 -- common/autotest_common.sh@877 -- # return 0 00:15:32.883 12:35:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.883 12:35:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
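The trace repeats the same readiness probe for every NBD device it starts: poll /proc/partitions until the kernel lists the device, then issue a single 4 KiB O_DIRECT read with dd and check that the copied file is non-empty. A hedged reconstruction of that helper, pieced together from the xtrace lines above (the real waitfornbd lives in autotest_common.sh and may differ in detail):

    # Reconstructed from the trace; illustrative, not the verbatim helper.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break  # device visible yet?
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # one direct read proves I/O actually reaches the SPDK target behind the nbd
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }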
00:15:32.883 12:35:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:15:33.142 12:35:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:33.142 12:35:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:33.142 12:35:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:33.142 12:35:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:15:33.142 12:35:15 -- common/autotest_common.sh@857 -- # local i 00:15:33.142 12:35:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:33.142 12:35:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:33.142 12:35:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:15:33.142 12:35:15 -- common/autotest_common.sh@861 -- # break 00:15:33.142 12:35:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:33.142 12:35:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:33.142 12:35:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.142 1+0 records in 00:15:33.142 1+0 records out 00:15:33.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325158 s, 12.6 MB/s 00:15:33.142 12:35:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.142 12:35:15 -- common/autotest_common.sh@874 -- # size=4096 00:15:33.142 12:35:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.142 12:35:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:33.142 12:35:15 -- common/autotest_common.sh@877 -- # return 0 00:15:33.142 12:35:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.142 12:35:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:33.142 12:35:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:15:33.402 12:35:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:33.402 12:35:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:33.402 12:35:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:33.402 12:35:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:15:33.402 12:35:15 -- common/autotest_common.sh@857 -- # local i 00:15:33.402 12:35:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:33.402 12:35:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:33.402 12:35:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:15:33.402 12:35:15 -- common/autotest_common.sh@861 -- # break 00:15:33.402 12:35:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:33.402 12:35:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:33.402 12:35:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.402 1+0 records in 00:15:33.402 1+0 records out 00:15:33.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475294 s, 8.6 MB/s 00:15:33.402 12:35:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.402 12:35:15 -- common/autotest_common.sh@874 -- # size=4096 00:15:33.402 12:35:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.402 12:35:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:33.402 12:35:15 -- common/autotest_common.sh@877 -- # return 0 00:15:33.402 12:35:15 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.402 12:35:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:33.402 12:35:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:15:33.660 12:35:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:33.660 12:35:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:33.660 12:35:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:33.660 12:35:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:15:33.660 12:35:16 -- common/autotest_common.sh@857 -- # local i 00:15:33.660 12:35:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:33.660 12:35:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:33.660 12:35:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:15:33.660 12:35:16 -- common/autotest_common.sh@861 -- # break 00:15:33.660 12:35:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:33.660 12:35:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:33.660 12:35:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.660 1+0 records in 00:15:33.660 1+0 records out 00:15:33.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556492 s, 7.4 MB/s 00:15:33.660 12:35:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.660 12:35:16 -- common/autotest_common.sh@874 -- # size=4096 00:15:33.660 12:35:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.660 12:35:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:33.660 12:35:16 -- common/autotest_common.sh@877 -- # return 0 00:15:33.660 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.660 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:33.660 12:35:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:15:33.919 12:35:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:15:33.919 12:35:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:15:33.919 12:35:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:15:33.919 12:35:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:15:33.919 12:35:16 -- common/autotest_common.sh@857 -- # local i 00:15:33.919 12:35:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:33.919 12:35:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:33.919 12:35:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:15:33.919 12:35:16 -- common/autotest_common.sh@861 -- # break 00:15:33.919 12:35:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:33.919 12:35:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:33.919 12:35:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.919 1+0 records in 00:15:33.919 1+0 records out 00:15:33.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474724 s, 8.6 MB/s 00:15:33.919 12:35:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.919 12:35:16 -- common/autotest_common.sh@874 -- # size=4096 00:15:33.919 12:35:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.919 12:35:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:15:33.919 12:35:16 -- common/autotest_common.sh@877 -- # return 0 00:15:33.919 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.919 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:33.919 12:35:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:15:34.178 12:35:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:15:34.178 12:35:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:15:34.178 12:35:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:15:34.178 12:35:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:15:34.178 12:35:16 -- common/autotest_common.sh@857 -- # local i 00:15:34.178 12:35:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:34.178 12:35:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:34.178 12:35:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:15:34.178 12:35:16 -- common/autotest_common.sh@861 -- # break 00:15:34.178 12:35:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:34.178 12:35:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:34.178 12:35:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.178 1+0 records in 00:15:34.178 1+0 records out 00:15:34.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596431 s, 6.9 MB/s 00:15:34.178 12:35:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.178 12:35:16 -- common/autotest_common.sh@874 -- # size=4096 00:15:34.178 12:35:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.178 12:35:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:34.178 12:35:16 -- common/autotest_common.sh@877 -- # return 0 00:15:34.178 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.178 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:34.178 12:35:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:15:34.178 12:35:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:15:34.436 12:35:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:15:34.436 12:35:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:15:34.436 12:35:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:15:34.437 12:35:16 -- common/autotest_common.sh@857 -- # local i 00:15:34.437 12:35:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:15:34.437 12:35:16 -- common/autotest_common.sh@861 -- # break 00:15:34.437 12:35:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.437 1+0 records in 00:15:34.437 1+0 records out 00:15:34.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311537 s, 13.1 MB/s 00:15:34.437 12:35:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.437 12:35:16 -- common/autotest_common.sh@874 -- # size=4096 00:15:34.437 12:35:16 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.437 12:35:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:34.437 12:35:16 -- common/autotest_common.sh@877 -- # return 0 00:15:34.437 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.437 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:34.437 12:35:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:15:34.437 12:35:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:15:34.437 12:35:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:15:34.437 12:35:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:15:34.437 12:35:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:15:34.437 12:35:16 -- common/autotest_common.sh@857 -- # local i 00:15:34.437 12:35:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:15:34.437 12:35:16 -- common/autotest_common.sh@861 -- # break 00:15:34.437 12:35:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:34.437 12:35:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.437 1+0 records in 00:15:34.437 1+0 records out 00:15:34.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638267 s, 6.4 MB/s 00:15:34.437 12:35:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.437 12:35:16 -- common/autotest_common.sh@874 -- # size=4096 00:15:34.437 12:35:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.437 12:35:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:34.437 12:35:16 -- common/autotest_common.sh@877 -- # return 0 00:15:34.696 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.696 12:35:16 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:34.696 12:35:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:15:34.696 12:35:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:15:34.696 12:35:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:15:34.696 12:35:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:15:34.696 12:35:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:15:34.696 12:35:17 -- common/autotest_common.sh@857 -- # local i 00:15:34.696 12:35:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:34.696 12:35:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:34.696 12:35:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:15:34.696 12:35:17 -- common/autotest_common.sh@861 -- # break 00:15:34.696 12:35:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:34.696 12:35:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:34.696 12:35:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.696 1+0 records in 00:15:34.696 1+0 records out 00:15:34.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680725 s, 6.0 MB/s 00:15:34.696 12:35:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.696 12:35:17 -- 
common/autotest_common.sh@874 -- # size=4096 00:15:34.696 12:35:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.696 12:35:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:34.696 12:35:17 -- common/autotest_common.sh@877 -- # return 0 00:15:34.696 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.696 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:34.696 12:35:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:15:34.954 12:35:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:15:34.954 12:35:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:15:34.954 12:35:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:15:34.954 12:35:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:15:34.954 12:35:17 -- common/autotest_common.sh@857 -- # local i 00:15:34.954 12:35:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:34.954 12:35:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:34.954 12:35:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:15:34.954 12:35:17 -- common/autotest_common.sh@861 -- # break 00:15:34.954 12:35:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:34.954 12:35:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:34.954 12:35:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.954 1+0 records in 00:15:34.954 1+0 records out 00:15:34.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759042 s, 5.4 MB/s 00:15:34.954 12:35:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.954 12:35:17 -- common/autotest_common.sh@874 -- # size=4096 00:15:34.954 12:35:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.954 12:35:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:34.955 12:35:17 -- common/autotest_common.sh@877 -- # return 0 00:15:34.955 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.955 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:34.955 12:35:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:15:35.214 12:35:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:15:35.214 12:35:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:15:35.214 12:35:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:15:35.214 12:35:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:15:35.214 12:35:17 -- common/autotest_common.sh@857 -- # local i 00:15:35.214 12:35:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:35.214 12:35:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:35.214 12:35:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:15:35.214 12:35:17 -- common/autotest_common.sh@861 -- # break 00:15:35.214 12:35:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:35.214 12:35:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:35.214 12:35:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.214 1+0 records in 00:15:35.214 1+0 records out 00:15:35.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758413 s, 5.4 MB/s 00:15:35.214 12:35:17 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.214 12:35:17 -- common/autotest_common.sh@874 -- # size=4096 00:15:35.214 12:35:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.214 12:35:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:35.214 12:35:17 -- common/autotest_common.sh@877 -- # return 0 00:15:35.214 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:35.214 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:35.214 12:35:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:15:35.473 12:35:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:15:35.473 12:35:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:15:35.473 12:35:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:15:35.473 12:35:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:15:35.473 12:35:17 -- common/autotest_common.sh@857 -- # local i 00:15:35.473 12:35:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:35.473 12:35:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:35.473 12:35:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:15:35.473 12:35:17 -- common/autotest_common.sh@861 -- # break 00:15:35.473 12:35:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:35.473 12:35:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:35.473 12:35:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.473 1+0 records in 00:15:35.473 1+0 records out 00:15:35.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879921 s, 4.7 MB/s 00:15:35.473 12:35:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.473 12:35:17 -- common/autotest_common.sh@874 -- # size=4096 00:15:35.473 12:35:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.473 12:35:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:35.473 12:35:17 -- common/autotest_common.sh@877 -- # return 0 00:15:35.473 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:35.473 12:35:17 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:35.473 12:35:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:15:35.732 12:35:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:15:35.732 12:35:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:15:35.732 12:35:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:15:35.732 12:35:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:15:35.732 12:35:18 -- common/autotest_common.sh@857 -- # local i 00:15:35.732 12:35:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:35.732 12:35:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:35.732 12:35:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:15:35.732 12:35:18 -- common/autotest_common.sh@861 -- # break 00:15:35.732 12:35:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:35.732 12:35:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:35.732 12:35:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.732 1+0 records in 00:15:35.732 1+0 records out 
00:15:35.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676131 s, 6.1 MB/s 00:15:35.732 12:35:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.732 12:35:18 -- common/autotest_common.sh@874 -- # size=4096 00:15:35.732 12:35:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.732 12:35:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:35.732 12:35:18 -- common/autotest_common.sh@877 -- # return 0 00:15:35.732 12:35:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:35.732 12:35:18 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:35.732 12:35:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:15:35.991 12:35:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:15:35.991 12:35:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:15:35.991 12:35:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:15:35.991 12:35:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:15:35.991 12:35:18 -- common/autotest_common.sh@857 -- # local i 00:15:35.991 12:35:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:35.991 12:35:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:35.991 12:35:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:15:35.991 12:35:18 -- common/autotest_common.sh@861 -- # break 00:15:35.991 12:35:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:35.991 12:35:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:35.991 12:35:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.991 1+0 records in 00:15:35.991 1+0 records out 00:15:35.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00216684 s, 1.9 MB/s 00:15:35.991 12:35:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.991 12:35:18 -- common/autotest_common.sh@874 -- # size=4096 00:15:35.991 12:35:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.991 12:35:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:35.991 12:35:18 -- common/autotest_common.sh@877 -- # return 0 00:15:35.991 12:35:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:35.991 12:35:18 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:15:35.991 12:35:18 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:36.250 12:35:18 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd0", 00:15:36.250 "bdev_name": "Malloc0" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd1", 00:15:36.250 "bdev_name": "Malloc1p0" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd2", 00:15:36.250 "bdev_name": "Malloc1p1" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd3", 00:15:36.250 "bdev_name": "Malloc2p0" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd4", 00:15:36.250 "bdev_name": "Malloc2p1" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd5", 00:15:36.250 "bdev_name": "Malloc2p2" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd6", 00:15:36.250 "bdev_name": "Malloc2p3" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd7", 00:15:36.250 "bdev_name": "Malloc2p4" 00:15:36.250 }, 
00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd8", 00:15:36.250 "bdev_name": "Malloc2p5" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd9", 00:15:36.250 "bdev_name": "Malloc2p6" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd10", 00:15:36.250 "bdev_name": "Malloc2p7" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd11", 00:15:36.250 "bdev_name": "TestPT" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd12", 00:15:36.250 "bdev_name": "raid0" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd13", 00:15:36.250 "bdev_name": "concat0" 00:15:36.250 }, 00:15:36.250 { 00:15:36.250 "nbd_device": "/dev/nbd14", 00:15:36.250 "bdev_name": "raid1" 00:15:36.250 }, 00:15:36.250 { 00:15:36.251 "nbd_device": "/dev/nbd15", 00:15:36.251 "bdev_name": "AIO0" 00:15:36.251 } 00:15:36.251 ]' 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd0", 00:15:36.251 "bdev_name": "Malloc0" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd1", 00:15:36.251 "bdev_name": "Malloc1p0" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd2", 00:15:36.251 "bdev_name": "Malloc1p1" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd3", 00:15:36.251 "bdev_name": "Malloc2p0" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd4", 00:15:36.251 "bdev_name": "Malloc2p1" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd5", 00:15:36.251 "bdev_name": "Malloc2p2" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd6", 00:15:36.251 "bdev_name": "Malloc2p3" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd7", 00:15:36.251 "bdev_name": "Malloc2p4" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd8", 00:15:36.251 "bdev_name": "Malloc2p5" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd9", 00:15:36.251 "bdev_name": "Malloc2p6" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd10", 00:15:36.251 "bdev_name": "Malloc2p7" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd11", 00:15:36.251 "bdev_name": "TestPT" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd12", 00:15:36.251 "bdev_name": "raid0" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd13", 00:15:36.251 "bdev_name": "concat0" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd14", 00:15:36.251 "bdev_name": "raid1" 00:15:36.251 }, 00:15:36.251 { 00:15:36.251 "nbd_device": "/dev/nbd15", 00:15:36.251 "bdev_name": "AIO0" 00:15:36.251 } 00:15:36.251 ]' 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@51 -- # local i 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.251 12:35:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@41 -- # break 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.510 12:35:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@41 -- # break 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.510 12:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@41 -- # break 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.768 12:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@41 -- # break 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.027 12:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:37.286 
12:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@41 -- # break 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@41 -- # break 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.286 12:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@41 -- # break 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.545 12:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@41 -- # break 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@41 -- # break 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:37.863 12:35:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@41 -- # break 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.141 12:35:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@41 -- # break 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.400 12:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:38.658 12:35:20 -- bdev/nbd_common.sh@41 -- # break 00:15:38.658 12:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.658 12:35:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.658 12:35:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@41 -- # break 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.658 12:35:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@41 -- # break 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.917 12:35:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@41 -- # break 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@41 -- # break 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.231 12:35:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@65 -- # true 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@65 -- # count=0 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@122 -- # count=0 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@127 -- # return 0 00:15:39.494 12:35:21 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@12 -- # local i 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:39.494 12:35:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:39.752 /dev/nbd0 00:15:39.752 12:35:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.752 12:35:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.752 12:35:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:39.752 12:35:22 -- common/autotest_common.sh@857 -- # local i 00:15:39.752 12:35:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:39.752 12:35:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:39.752 12:35:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:39.752 12:35:22 -- common/autotest_common.sh@861 -- # break 00:15:39.752 12:35:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:39.752 12:35:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:39.752 12:35:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.752 1+0 records in 00:15:39.752 1+0 records out 00:15:39.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113802 s, 3.6 MB/s 00:15:39.752 12:35:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.752 12:35:22 -- common/autotest_common.sh@874 -- # size=4096 00:15:39.752 12:35:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.752 12:35:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:39.752 12:35:22 -- common/autotest_common.sh@877 -- # return 0 00:15:39.752 12:35:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.752 
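The @856-@877 entries traced above are the waitfornbd readiness probe from common/autotest_common.sh. A minimal sketch of that helper, reconstructed from the xtrace (loop bounds, grep test, dd/stat/rm sequence, and the nbdtest path are taken from the trace itself; the sleep between retries is an assumption, since the trace only records the loop bounds):

waitfornbd() {
    local nbd_name=$1
    local i
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest  # scratch file used in this run
    # Poll /proc/partitions until the kernel has registered the nbd device.
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1  # assumed back-off between polls
    done
    # Prove the device actually services I/O: read one 4 KiB block with
    # O_DIRECT and require a non-empty copy before declaring it ready.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        fi
    done
    return 1
}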
12:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:39.752 12:35:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:15:40.011 /dev/nbd1 00:15:40.011 12:35:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.011 12:35:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.011 12:35:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:40.011 12:35:22 -- common/autotest_common.sh@857 -- # local i 00:15:40.011 12:35:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:40.011 12:35:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:40.011 12:35:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:40.011 12:35:22 -- common/autotest_common.sh@861 -- # break 00:15:40.011 12:35:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:40.011 12:35:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:40.011 12:35:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.011 1+0 records in 00:15:40.011 1+0 records out 00:15:40.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281999 s, 14.5 MB/s 00:15:40.011 12:35:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.011 12:35:22 -- common/autotest_common.sh@874 -- # size=4096 00:15:40.011 12:35:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.011 12:35:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:40.011 12:35:22 -- common/autotest_common.sh@877 -- # return 0 00:15:40.011 12:35:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.011 12:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:40.011 12:35:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:15:40.270 /dev/nbd10 00:15:40.270 12:35:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:40.270 12:35:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:40.270 12:35:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:15:40.270 12:35:22 -- common/autotest_common.sh@857 -- # local i 00:15:40.270 12:35:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:40.270 12:35:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:40.270 12:35:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:15:40.270 12:35:22 -- common/autotest_common.sh@861 -- # break 00:15:40.270 12:35:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:40.270 12:35:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:40.270 12:35:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.270 1+0 records in 00:15:40.270 1+0 records out 00:15:40.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535345 s, 7.7 MB/s 00:15:40.270 12:35:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.270 12:35:22 -- common/autotest_common.sh@874 -- # size=4096 00:15:40.270 12:35:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.270 12:35:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:40.270 12:35:22 -- common/autotest_common.sh@877 -- # return 0 00:15:40.270 12:35:22 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:15:40.270 12:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:40.270 12:35:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:15:40.529 /dev/nbd11 00:15:40.529 12:35:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:40.529 12:35:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:40.529 12:35:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:15:40.529 12:35:22 -- common/autotest_common.sh@857 -- # local i 00:15:40.529 12:35:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:40.529 12:35:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:40.529 12:35:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:15:40.529 12:35:22 -- common/autotest_common.sh@861 -- # break 00:15:40.529 12:35:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:40.529 12:35:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:40.529 12:35:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.529 1+0 records in 00:15:40.529 1+0 records out 00:15:40.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039544 s, 10.4 MB/s 00:15:40.529 12:35:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.529 12:35:22 -- common/autotest_common.sh@874 -- # size=4096 00:15:40.529 12:35:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.529 12:35:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:40.529 12:35:22 -- common/autotest_common.sh@877 -- # return 0 00:15:40.529 12:35:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.529 12:35:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:40.529 12:35:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:15:40.788 /dev/nbd12 00:15:40.788 12:35:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:40.788 12:35:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:40.788 12:35:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:15:40.788 12:35:23 -- common/autotest_common.sh@857 -- # local i 00:15:40.788 12:35:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:40.788 12:35:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:40.788 12:35:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:15:40.788 12:35:23 -- common/autotest_common.sh@861 -- # break 00:15:40.788 12:35:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:40.788 12:35:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:40.788 12:35:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.788 1+0 records in 00:15:40.788 1+0 records out 00:15:40.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608272 s, 6.7 MB/s 00:15:40.788 12:35:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.788 12:35:23 -- common/autotest_common.sh@874 -- # size=4096 00:15:40.788 12:35:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.788 12:35:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:40.788 12:35:23 -- common/autotest_common.sh@877 -- # return 0 00:15:40.788 12:35:23 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.788 12:35:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:40.788 12:35:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:15:41.047 /dev/nbd13 00:15:41.047 12:35:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:41.047 12:35:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:41.047 12:35:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:15:41.047 12:35:23 -- common/autotest_common.sh@857 -- # local i 00:15:41.047 12:35:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:41.047 12:35:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:41.047 12:35:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:15:41.047 12:35:23 -- common/autotest_common.sh@861 -- # break 00:15:41.047 12:35:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:41.047 12:35:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:41.047 12:35:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.047 1+0 records in 00:15:41.047 1+0 records out 00:15:41.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107572 s, 3.8 MB/s 00:15:41.047 12:35:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.047 12:35:23 -- common/autotest_common.sh@874 -- # size=4096 00:15:41.047 12:35:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.047 12:35:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:41.047 12:35:23 -- common/autotest_common.sh@877 -- # return 0 00:15:41.047 12:35:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.047 12:35:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:41.047 12:35:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:15:41.047 /dev/nbd14 00:15:41.047 12:35:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:15:41.306 12:35:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:15:41.306 12:35:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:15:41.306 12:35:23 -- common/autotest_common.sh@857 -- # local i 00:15:41.306 12:35:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:41.306 12:35:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:41.306 12:35:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:15:41.306 12:35:23 -- common/autotest_common.sh@861 -- # break 00:15:41.306 12:35:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:41.306 12:35:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:41.307 12:35:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.307 1+0 records in 00:15:41.307 1+0 records out 00:15:41.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724059 s, 5.7 MB/s 00:15:41.307 12:35:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.307 12:35:23 -- common/autotest_common.sh@874 -- # size=4096 00:15:41.307 12:35:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.307 12:35:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:41.307 12:35:23 -- common/autotest_common.sh@877 -- # return 0 
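The (( i < 16 )) iterations around these entries are the per-device loop of nbd_start_disks in bdev/nbd_common.sh, pairing each bdev with an nbd node and gating on the readiness probe sketched above. A minimal reconstruction from the @9-@17 trace lines (argument splitting is simplified, and the hard-coded 16 in the trace is just the expanded length of the device array):

nbd_start_disks() {
    local rpc_server=$1
    local bdev_list=($2)  # bdev names, whitespace-separated
    local nbd_list=($3)   # matching /dev/nbdX paths, same order
    local i
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # Ask the SPDK target, over its RPC socket, to export bdev i on nbd i...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
            nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        # ...then block until the node passes the waitfornbd readiness check.
        waitfornbd "$(basename "${nbd_list[i]}")"
    done
}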
00:15:41.307 12:35:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.307 12:35:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:41.307 12:35:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:15:41.307 /dev/nbd15 00:15:41.307 12:35:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:15:41.307 12:35:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:15:41.307 12:35:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:15:41.307 12:35:23 -- common/autotest_common.sh@857 -- # local i 00:15:41.307 12:35:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:41.307 12:35:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:41.307 12:35:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:15:41.307 12:35:23 -- common/autotest_common.sh@861 -- # break 00:15:41.307 12:35:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:41.307 12:35:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:41.307 12:35:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.307 1+0 records in 00:15:41.307 1+0 records out 00:15:41.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580528 s, 7.1 MB/s 00:15:41.307 12:35:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.307 12:35:23 -- common/autotest_common.sh@874 -- # size=4096 00:15:41.307 12:35:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.566 12:35:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:41.566 12:35:23 -- common/autotest_common.sh@877 -- # return 0 00:15:41.566 12:35:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.566 12:35:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:41.566 12:35:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:15:41.566 /dev/nbd2 00:15:41.566 12:35:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:15:41.566 12:35:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:15:41.566 12:35:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:15:41.566 12:35:24 -- common/autotest_common.sh@857 -- # local i 00:15:41.566 12:35:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:41.566 12:35:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:41.566 12:35:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:15:41.566 12:35:24 -- common/autotest_common.sh@861 -- # break 00:15:41.566 12:35:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:41.566 12:35:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:41.566 12:35:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.566 1+0 records in 00:15:41.566 1+0 records out 00:15:41.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554785 s, 7.4 MB/s 00:15:41.566 12:35:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.566 12:35:24 -- common/autotest_common.sh@874 -- # size=4096 00:15:41.566 12:35:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.566 12:35:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:41.566 12:35:24 -- common/autotest_common.sh@877 
-- # return 0 00:15:41.566 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.566 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:41.566 12:35:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:15:41.825 /dev/nbd3 00:15:41.825 12:35:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:15:41.825 12:35:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:15:41.825 12:35:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:15:41.825 12:35:24 -- common/autotest_common.sh@857 -- # local i 00:15:41.825 12:35:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:41.825 12:35:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:41.825 12:35:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:15:41.825 12:35:24 -- common/autotest_common.sh@861 -- # break 00:15:41.825 12:35:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:41.825 12:35:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:41.825 12:35:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.825 1+0 records in 00:15:41.825 1+0 records out 00:15:41.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597907 s, 6.9 MB/s 00:15:41.825 12:35:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.825 12:35:24 -- common/autotest_common.sh@874 -- # size=4096 00:15:41.825 12:35:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.825 12:35:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:41.825 12:35:24 -- common/autotest_common.sh@877 -- # return 0 00:15:41.825 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.825 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:41.825 12:35:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:15:42.084 /dev/nbd4 00:15:42.084 12:35:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:15:42.084 12:35:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:15:42.084 12:35:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:15:42.084 12:35:24 -- common/autotest_common.sh@857 -- # local i 00:15:42.084 12:35:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:42.084 12:35:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:42.084 12:35:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:15:42.084 12:35:24 -- common/autotest_common.sh@861 -- # break 00:15:42.084 12:35:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:42.084 12:35:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:42.084 12:35:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.084 1+0 records in 00:15:42.084 1+0 records out 00:15:42.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642433 s, 6.4 MB/s 00:15:42.084 12:35:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.084 12:35:24 -- common/autotest_common.sh@874 -- # size=4096 00:15:42.084 12:35:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.084 12:35:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:42.084 12:35:24 -- 
common/autotest_common.sh@877 -- # return 0 00:15:42.084 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.084 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:42.084 12:35:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:15:42.343 /dev/nbd5 00:15:42.343 12:35:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:15:42.343 12:35:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:15:42.343 12:35:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:15:42.343 12:35:24 -- common/autotest_common.sh@857 -- # local i 00:15:42.343 12:35:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:42.343 12:35:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:42.343 12:35:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:15:42.343 12:35:24 -- common/autotest_common.sh@861 -- # break 00:15:42.343 12:35:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:42.343 12:35:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:42.343 12:35:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.343 1+0 records in 00:15:42.343 1+0 records out 00:15:42.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100756 s, 4.1 MB/s 00:15:42.343 12:35:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.343 12:35:24 -- common/autotest_common.sh@874 -- # size=4096 00:15:42.343 12:35:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.343 12:35:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:42.343 12:35:24 -- common/autotest_common.sh@877 -- # return 0 00:15:42.343 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.343 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:42.343 12:35:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:15:42.602 /dev/nbd6 00:15:42.602 12:35:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:15:42.602 12:35:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:15:42.602 12:35:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:15:42.602 12:35:24 -- common/autotest_common.sh@857 -- # local i 00:15:42.602 12:35:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:42.602 12:35:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:42.602 12:35:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:15:42.602 12:35:24 -- common/autotest_common.sh@861 -- # break 00:15:42.602 12:35:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:42.602 12:35:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:42.602 12:35:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.602 1+0 records in 00:15:42.602 1+0 records out 00:15:42.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858911 s, 4.8 MB/s 00:15:42.602 12:35:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.602 12:35:24 -- common/autotest_common.sh@874 -- # size=4096 00:15:42.602 12:35:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.602 12:35:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:42.602 12:35:24 -- 
common/autotest_common.sh@877 -- # return 0 00:15:42.602 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.602 12:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:42.602 12:35:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:15:42.861 /dev/nbd7 00:15:42.861 12:35:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:15:42.861 12:35:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:15:42.861 12:35:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:15:42.861 12:35:25 -- common/autotest_common.sh@857 -- # local i 00:15:42.861 12:35:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:42.861 12:35:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:42.861 12:35:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:15:42.861 12:35:25 -- common/autotest_common.sh@861 -- # break 00:15:42.861 12:35:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:42.861 12:35:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:42.861 12:35:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.861 1+0 records in 00:15:42.861 1+0 records out 00:15:42.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071368 s, 5.7 MB/s 00:15:42.861 12:35:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.861 12:35:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:42.861 12:35:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.861 12:35:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:42.861 12:35:25 -- common/autotest_common.sh@877 -- # return 0 00:15:42.861 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.861 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:42.861 12:35:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:15:43.122 /dev/nbd8 00:15:43.122 12:35:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:15:43.122 12:35:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:15:43.122 12:35:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:15:43.122 12:35:25 -- common/autotest_common.sh@857 -- # local i 00:15:43.122 12:35:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:43.122 12:35:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:43.122 12:35:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:15:43.122 12:35:25 -- common/autotest_common.sh@861 -- # break 00:15:43.122 12:35:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:43.122 12:35:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:43.122 12:35:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.122 1+0 records in 00:15:43.122 1+0 records out 00:15:43.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776326 s, 5.3 MB/s 00:15:43.122 12:35:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.122 12:35:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:43.122 12:35:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.122 12:35:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:43.122 12:35:25 -- 
common/autotest_common.sh@877 -- # return 0 00:15:43.122 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.122 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:43.122 12:35:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:15:43.122 /dev/nbd9 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:15:43.383 12:35:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:15:43.383 12:35:25 -- common/autotest_common.sh@857 -- # local i 00:15:43.383 12:35:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:43.383 12:35:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:43.383 12:35:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:15:43.383 12:35:25 -- common/autotest_common.sh@861 -- # break 00:15:43.383 12:35:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:43.383 12:35:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:43.383 12:35:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.383 1+0 records in 00:15:43.383 1+0 records out 00:15:43.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121788 s, 3.4 MB/s 00:15:43.383 12:35:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.383 12:35:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:43.383 12:35:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.383 12:35:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:43.383 12:35:25 -- common/autotest_common.sh@877 -- # return 0 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:43.383 12:35:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:43.383 { 00:15:43.383 "nbd_device": "/dev/nbd0", 00:15:43.383 "bdev_name": "Malloc0" 00:15:43.383 }, 00:15:43.383 { 00:15:43.383 "nbd_device": "/dev/nbd1", 00:15:43.383 "bdev_name": "Malloc1p0" 00:15:43.383 }, 00:15:43.383 { 00:15:43.383 "nbd_device": "/dev/nbd10", 00:15:43.383 "bdev_name": "Malloc1p1" 00:15:43.383 }, 00:15:43.383 { 00:15:43.384 "nbd_device": "/dev/nbd11", 00:15:43.384 "bdev_name": "Malloc2p0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd12", 00:15:43.384 "bdev_name": "Malloc2p1" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd13", 00:15:43.384 "bdev_name": "Malloc2p2" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd14", 00:15:43.384 "bdev_name": "Malloc2p3" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd15", 00:15:43.384 "bdev_name": "Malloc2p4" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd2", 00:15:43.384 "bdev_name": "Malloc2p5" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd3", 00:15:43.384 "bdev_name": "Malloc2p6" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd4", 00:15:43.384 "bdev_name": "Malloc2p7" 00:15:43.384 }, 00:15:43.384 { 
00:15:43.384 "nbd_device": "/dev/nbd5", 00:15:43.384 "bdev_name": "TestPT" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd6", 00:15:43.384 "bdev_name": "raid0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd7", 00:15:43.384 "bdev_name": "concat0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd8", 00:15:43.384 "bdev_name": "raid1" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd9", 00:15:43.384 "bdev_name": "AIO0" 00:15:43.384 } 00:15:43.384 ]' 00:15:43.384 12:35:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd0", 00:15:43.384 "bdev_name": "Malloc0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd1", 00:15:43.384 "bdev_name": "Malloc1p0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd10", 00:15:43.384 "bdev_name": "Malloc1p1" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd11", 00:15:43.384 "bdev_name": "Malloc2p0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd12", 00:15:43.384 "bdev_name": "Malloc2p1" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd13", 00:15:43.384 "bdev_name": "Malloc2p2" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd14", 00:15:43.384 "bdev_name": "Malloc2p3" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd15", 00:15:43.384 "bdev_name": "Malloc2p4" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd2", 00:15:43.384 "bdev_name": "Malloc2p5" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd3", 00:15:43.384 "bdev_name": "Malloc2p6" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd4", 00:15:43.384 "bdev_name": "Malloc2p7" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd5", 00:15:43.384 "bdev_name": "TestPT" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd6", 00:15:43.384 "bdev_name": "raid0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd7", 00:15:43.384 "bdev_name": "concat0" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd8", 00:15:43.384 "bdev_name": "raid1" 00:15:43.384 }, 00:15:43.384 { 00:15:43.384 "nbd_device": "/dev/nbd9", 00:15:43.384 "bdev_name": "AIO0" 00:15:43.384 } 00:15:43.384 ]' 00:15:43.384 12:35:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:43.644 /dev/nbd1 00:15:43.644 /dev/nbd10 00:15:43.644 /dev/nbd11 00:15:43.644 /dev/nbd12 00:15:43.644 /dev/nbd13 00:15:43.644 /dev/nbd14 00:15:43.644 /dev/nbd15 00:15:43.644 /dev/nbd2 00:15:43.644 /dev/nbd3 00:15:43.644 /dev/nbd4 00:15:43.644 /dev/nbd5 00:15:43.644 /dev/nbd6 00:15:43.644 /dev/nbd7 00:15:43.644 /dev/nbd8 00:15:43.644 /dev/nbd9' 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:43.644 /dev/nbd1 00:15:43.644 /dev/nbd10 00:15:43.644 /dev/nbd11 00:15:43.644 /dev/nbd12 00:15:43.644 /dev/nbd13 00:15:43.644 /dev/nbd14 00:15:43.644 /dev/nbd15 00:15:43.644 /dev/nbd2 00:15:43.644 /dev/nbd3 00:15:43.644 /dev/nbd4 00:15:43.644 /dev/nbd5 00:15:43.644 /dev/nbd6 00:15:43.644 /dev/nbd7 00:15:43.644 /dev/nbd8 00:15:43.644 /dev/nbd9' 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@65 -- # count=16 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@66 -- # echo 16 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@95 -- # count=16 00:15:43.644 12:35:25 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:43.644 12:35:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:43.645 12:35:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:43.645 12:35:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:43.645 12:35:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:43.645 256+0 records in 00:15:43.645 256+0 records out 00:15:43.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131082 s, 80.0 MB/s 00:15:43.645 12:35:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.645 12:35:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:43.645 256+0 records in 00:15:43.645 256+0 records out 00:15:43.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145237 s, 7.2 MB/s 00:15:43.645 12:35:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.645 12:35:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:43.903 256+0 records in 00:15:43.903 256+0 records out 00:15:43.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150579 s, 7.0 MB/s 00:15:43.903 12:35:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.903 12:35:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:43.903 256+0 records in 00:15:43.903 256+0 records out 00:15:43.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147808 s, 7.1 MB/s 00:15:43.903 12:35:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.903 12:35:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:44.162 256+0 records in 00:15:44.162 256+0 records out 00:15:44.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149174 s, 7.0 MB/s 00:15:44.162 12:35:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.162 12:35:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:44.422 256+0 records in 00:15:44.422 256+0 records out 00:15:44.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14802 s, 7.1 MB/s 00:15:44.422 12:35:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.422 12:35:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:44.422 256+0 records in 00:15:44.422 256+0 records out 00:15:44.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150155 s, 7.0 MB/s 00:15:44.422 12:35:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.422 12:35:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:15:44.681 256+0 records in 00:15:44.681 256+0 records out 00:15:44.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1484 s, 7.1 MB/s 00:15:44.681 12:35:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.681 12:35:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:15:44.681 256+0 records in 00:15:44.681 256+0 records out 00:15:44.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147149 s, 7.1 MB/s 00:15:44.681 12:35:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.681 12:35:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:15:44.940 256+0 records in 00:15:44.940 256+0 records out 00:15:44.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147179 s, 7.1 MB/s 00:15:44.941 12:35:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.941 12:35:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:15:45.199 256+0 records in 00:15:45.199 256+0 records out 00:15:45.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148904 s, 7.0 MB/s 00:15:45.199 12:35:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.199 12:35:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:15:45.199 256+0 records in 00:15:45.199 256+0 records out 00:15:45.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14772 s, 7.1 MB/s 00:15:45.199 12:35:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.199 12:35:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:15:45.458 256+0 records in 00:15:45.459 256+0 records out 00:15:45.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148016 s, 7.1 MB/s 00:15:45.459 12:35:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.459 12:35:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:15:45.459 256+0 records in 00:15:45.459 256+0 records out 00:15:45.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151156 s, 6.9 MB/s 00:15:45.459 12:35:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.459 12:35:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:15:45.718 256+0 records in 00:15:45.718 256+0 records out 00:15:45.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149575 s, 7.0 MB/s 00:15:45.718 12:35:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.718 12:35:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:15:45.977 256+0 records in 00:15:45.977 256+0 records out 00:15:45.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15038 s, 7.0 MB/s 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:15:45.977 256+0 records in 00:15:45.977 256+0 records out 00:15:45.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220612 s, 4.8 MB/s 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 
/dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.977 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in 
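What the dd and cmp passes above amount to: nbd_common.sh writes the same 1 MiB random file (256 x 4 KiB blocks, oflag=direct to bypass the page cache) to every exported NBD device, then replays that file against each device with cmp. A minimal standalone sketch of the round trip, with an illustrative two-device list standing in for the sixteen devices above:

# Sketch of the nbd_common.sh write/verify round trip; NBDS is illustrative
# and the devices are assumed to be exported and at least 1 MiB in size.
NBDS=(/dev/nbd0 /dev/nbd1)
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256 status=none
for dev in "${NBDS[@]}"; do
  dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct status=none
done
for dev in "${NBDS[@]}"; do
  cmp -b -n 1M "$tmp" "$dev"   # -b reports differing bytes, -n 1M bounds the compare
done
rm "$tmp" && echo "all devices verified"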
"${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:15:46.236 12:35:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@51 -- # local i 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.237 12:35:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@41 -- # break 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.495 12:35:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@41 -- # break 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.753 12:35:29 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@41 -- # break 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.753 12:35:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:47.011 12:35:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@41 -- # break 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.012 12:35:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@41 -- # break 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@41 -- # break 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.270 12:35:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@41 -- # break 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.529 12:35:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:15:47.804 12:35:30 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@41 -- # break 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.804 12:35:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@41 -- # break 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@41 -- # break 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.104 12:35:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@41 -- # break 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.362 12:35:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@41 
-- # break 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.621 12:35:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:15:48.621 12:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:15:48.621 12:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@41 -- # break 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.622 12:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@41 -- # break 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.880 12:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@41 -- # break 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.139 12:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@41 -- # break 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@63 -- # 
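Each nbd_stop_disk RPC in the teardown above is paired with waitfornbd_exit, which polls /proc/partitions for up to 20 iterations until the kernel drops the nbdX entry; grep -w matches whole words so nbd1 never matches nbd10. A self-contained sketch of that stop-and-wait pattern (the rpc.py path and socket mirror the trace; the sleep between retries is an assumption, since the trace never needs a second iteration):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
stop_and_wait() {
  local dev=$1 name i
  name=$(basename "$dev")                 # /dev/nbd10 -> nbd10
  "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$name" /proc/partitions || return 0   # entry gone: success
    sleep 0.1                             # assumed retry interval
  done
  echo "$name still in /proc/partitions after $((i - 1)) tries" >&2
  return 1
}
stop_and_wait /dev/nbd0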
nbd_disks_json='[]' 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:49.398 12:35:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@65 -- # true 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@65 -- # count=0 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@104 -- # count=0 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@109 -- # return 0 00:15:49.657 12:35:31 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:49.657 12:35:31 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:49.657 malloc_lvol_verify 00:15:49.657 12:35:32 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:49.916 76cae954-6092-4c1f-b2e9-50893aa17d6b 00:15:49.916 12:35:32 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:50.175 4a3908b6-b20a-4b7a-ad44-21dde03b8600 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:50.175 /dev/nbd0 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:50.175 mke2fs 1.46.5 (30-Dec-2021) 00:15:50.175 00:15:50.175 Filesystem too small for a journal 00:15:50.175 Discarding device blocks: 0/1024 done 00:15:50.175 Creating filesystem with 1024 4k blocks and 1024 inodes 00:15:50.175 00:15:50.175 Allocating group tables: 0/1 done 00:15:50.175 Writing inode tables: 0/1 done 00:15:50.175 Writing superblocks and filesystem accounting information: 0/1 done 00:15:50.175 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@51 -- # local i 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.175 12:35:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:50.434 
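Two checks run back to back above. First, nbd_get_count asks the RPC server for its remaining exports and counts /dev/nbd entries in the JSON; grep -c exits non-zero on zero matches, hence the true fallback in the trace, and the test insists the count is 0. Then nbd_with_lvol_verify proves the stack end to end: a malloc bdev, an lvstore and an lvol on top of it, an NBD export, and a real mkfs.ext4 on the device (the "Filesystem too small for a journal" notice is expected for a 4 MiB volume). A condensed sketch using the same RPC calls as the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# 1) No NBD exports may survive the teardown.
count=$("$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || { echo "leaked $count NBD device(s)" >&2; exit 1; }

# 2) 16 MiB malloc bdev with 512 B blocks -> lvstore -> 4 MiB lvol -> NBD -> mkfs.
"$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512
"$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs
"$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
"$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0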
12:35:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@41 -- # break 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:50.434 12:35:32 -- bdev/nbd_common.sh@147 -- # return 0 00:15:50.434 12:35:32 -- bdev/blockdev.sh@324 -- # killprocess 109641 00:15:50.434 12:35:32 -- common/autotest_common.sh@926 -- # '[' -z 109641 ']' 00:15:50.434 12:35:32 -- common/autotest_common.sh@930 -- # kill -0 109641 00:15:50.434 12:35:32 -- common/autotest_common.sh@931 -- # uname 00:15:50.434 12:35:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:50.435 12:35:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109641 00:15:50.435 12:35:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:50.435 12:35:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:50.435 killing process with pid 109641 00:15:50.435 12:35:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109641' 00:15:50.435 12:35:32 -- common/autotest_common.sh@945 -- # kill 109641 00:15:50.435 12:35:32 -- common/autotest_common.sh@950 -- # wait 109641 00:15:52.341 12:35:34 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:15:52.341 00:15:52.341 real 0m21.794s 00:15:52.341 user 0m26.683s 00:15:52.341 sys 0m9.674s 00:15:52.341 12:35:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.341 12:35:34 -- common/autotest_common.sh@10 -- # set +x 00:15:52.341 ************************************ 00:15:52.341 END TEST bdev_nbd 00:15:52.341 ************************************ 00:15:52.600 12:35:34 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:15:52.600 12:35:34 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:15:52.600 12:35:34 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:15:52.600 12:35:34 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:15:52.600 12:35:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:52.601 12:35:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:52.601 12:35:34 -- common/autotest_common.sh@10 -- # set +x 00:15:52.601 ************************************ 00:15:52.601 START TEST bdev_fio 00:15:52.601 ************************************ 00:15:52.601 12:35:34 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:15:52.601 12:35:34 -- bdev/blockdev.sh@329 -- # local env_context 00:15:52.601 12:35:34 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:52.601 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:52.601 12:35:34 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:52.601 12:35:34 -- bdev/blockdev.sh@337 -- # echo '' 00:15:52.601 12:35:34 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:15:52.601 12:35:34 -- bdev/blockdev.sh@337 -- # env_context= 00:15:52.601 12:35:34 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:52.601 12:35:34 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.601 12:35:34 -- common/autotest_common.sh@1260 -- # 
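killprocess, which tears down the NBD application above (pid 109641, an SPDK reactor), is deliberately careful: kill -0 first probes that the pid is still alive, ps -o comm= confirms the process name so a recycled pid belonging to something else is never signalled, and wait reaps the child so its exit status is collected. A sketch of that pattern (the trace also special-cases processes named sudo; that branch is simplified to a plain refusal here):

killprocess() {
  local pid=$1 name
  kill -0 "$pid" || return 1                      # pid must still be alive
  if [ "$(uname)" = Linux ]; then
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 in the trace
    if [ "$name" = sudo ]; then
      echo "refusing to signal a sudo wrapper (pid $pid)" >&2   # simplified branch
      return 1
    fi
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"    # only reaps the caller's own children
}
killprocess 109641   # pid taken from the trace above, for illustration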
local workload=verify 00:15:52.601 12:35:34 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:15:52.601 12:35:34 -- common/autotest_common.sh@1262 -- # local env_context= 00:15:52.601 12:35:34 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:15:52.601 12:35:34 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:52.601 12:35:34 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:15:52.601 12:35:34 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:15:52.601 12:35:34 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.601 12:35:34 -- common/autotest_common.sh@1280 -- # cat 00:15:52.601 12:35:34 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:15:52.601 12:35:34 -- common/autotest_common.sh@1293 -- # cat 00:15:52.601 12:35:34 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:15:52.601 12:35:34 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:15:52.601 12:35:35 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:52.601 12:35:35 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b 
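fio_config_gen above seeds bdev.fio with a verify workload (and, for fio 3.x with the AIO bdev type, serialize_overlap=1), after which blockdev.sh appends one job section per bdev: a [job_<name>] header plus filename=<name>, which the spdk_bdev ioengine resolves to the bdev of that name. A sketch of the per-bdev loop producing the sections echoed above and below; the redirection into the job file is implied by the trace rather than shown in it:

FIO_CFG=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
bdevs_name=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3
            Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
for b in "${bdevs_name[@]}"; do
  {
    echo "[job_$b]"        # one fio job per bdev
    echo "filename=$b"     # the spdk_bdev ioengine maps this name to the bdev
  } >> "$FIO_CFG"
done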
in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:15:52.601 12:35:35 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:52.601 12:35:35 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:15:52.601 12:35:35 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:52.601 12:35:35 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.601 12:35:35 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:15:52.601 12:35:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:52.601 12:35:35 -- common/autotest_common.sh@10 -- # set +x 00:15:52.601 ************************************ 00:15:52.601 START TEST bdev_fio_rw_verify 00:15:52.601 ************************************ 00:15:52.601 12:35:35 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.601 12:35:35 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.601 12:35:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:52.601 12:35:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:52.601 12:35:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:52.601 12:35:35 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.601 12:35:35 -- common/autotest_common.sh@1320 -- # shift 00:15:52.601 12:35:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:52.601 12:35:35 -- common/autotest_common.sh@1323 -- # for sanitizer in 
"${sanitizers[@]}" 00:15:52.601 12:35:35 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.601 12:35:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:52.601 12:35:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:52.861 12:35:35 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:15:52.861 12:35:35 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:15:52.861 12:35:35 -- common/autotest_common.sh@1326 -- # break 00:15:52.861 12:35:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:52.861 12:35:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.861 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.861 fio-3.35 00:15:52.861 Starting 16 threads 00:16:05.090 00:16:05.090 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=110769: Tue Oct 1 12:35:46 2024 00:16:05.090 read: IOPS=81.8k, BW=319MiB/s (335MB/s)(3201MiB/10024msec) 00:16:05.090 slat (nsec): min=1759, max=72469k, avg=36777.75, stdev=443916.39 00:16:05.090 clat (usec): min=6, max=72712, avg=297.10, stdev=1293.44 00:16:05.090 
lat (usec): min=18, max=72727, avg=333.88, stdev=1367.05 00:16:05.090 clat percentiles (usec): 00:16:05.090 | 50.000th=[ 176], 99.000th=[ 709], 99.900th=[16319], 99.990th=[31065], 00:16:05.090 | 99.999th=[52691] 00:16:05.090 write: IOPS=129k, BW=503MiB/s (527MB/s)(4966MiB/9876msec); 0 zone resets 00:16:05.090 slat (usec): min=3, max=50869, avg=60.04, stdev=635.64 00:16:05.090 clat (usec): min=8, max=51209, avg=366.68, stdev=1525.66 00:16:05.090 lat (usec): min=28, max=51256, avg=426.72, stdev=1652.06 00:16:05.090 clat percentiles (usec): 00:16:05.090 | 50.000th=[ 212], 99.000th=[ 4752], 99.900th=[19268], 99.990th=[36439], 00:16:05.090 | 99.999th=[48497] 00:16:05.090 bw ( KiB/s): min=312404, max=798968, per=98.61%, avg=507784.93, stdev=8830.00, samples=305 00:16:05.090 iops : min=78101, max=199742, avg=126945.97, stdev=2207.51, samples=305 00:16:05.090 lat (usec) : 10=0.01%, 20=0.01%, 50=0.93%, 100=10.52%, 250=57.56% 00:16:05.090 lat (usec) : 500=28.11%, 750=1.65%, 1000=0.16% 00:16:05.090 lat (msec) : 2=0.06%, 4=0.06%, 10=0.26%, 20=0.59%, 50=0.09% 00:16:05.090 lat (msec) : 100=0.01% 00:16:05.090 cpu : usr=57.09%, sys=1.93%, ctx=289399, majf=2, minf=88159 00:16:05.090 IO depths : 1=11.0%, 2=23.3%, 4=52.4%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.090 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.090 issued rwts: total=819539,1271360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:05.090 00:16:05.090 Run status group 0 (all jobs): 00:16:05.090 READ: bw=319MiB/s (335MB/s), 319MiB/s-319MiB/s (335MB/s-335MB/s), io=3201MiB (3357MB), run=10024-10024msec 00:16:05.090 WRITE: bw=503MiB/s (527MB/s), 503MiB/s-503MiB/s (527MB/s-527MB/s), io=4966MiB (5207MB), run=9876-9876msec 00:16:06.995 ----------------------------------------------------- 00:16:06.995 Suppressions used: 00:16:06.995 count bytes template 00:16:06.995 16 140 /usr/src/fio/parse.c 00:16:06.995 9964 956544 /usr/src/fio/iolog.c 00:16:06.995 1 904 libcrypto.so 00:16:06.995 ----------------------------------------------------- 00:16:06.995 00:16:07.256 00:16:07.256 real 0m14.435s 00:16:07.256 user 1m37.472s 00:16:07.256 sys 0m3.986s 00:16:07.256 12:35:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.256 ************************************ 00:16:07.256 END TEST bdev_fio_rw_verify 00:16:07.256 ************************************ 00:16:07.256 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.256 12:35:49 -- bdev/blockdev.sh@348 -- # rm -f 00:16:07.256 12:35:49 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.256 12:35:49 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:07.256 12:35:49 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.256 12:35:49 -- common/autotest_common.sh@1260 -- # local workload=trim 00:16:07.256 12:35:49 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:16:07.256 12:35:49 -- common/autotest_common.sh@1262 -- # local env_context= 00:16:07.256 12:35:49 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:16:07.256 12:35:49 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:07.256 12:35:49 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:16:07.256 12:35:49 -- 
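The rw_verify run that produced the numbers above had one non-obvious setup step: because the build is ASan-instrumented, the test runs ldd on the spdk_bdev fio plugin, greps out the resolved libasan path, and LD_PRELOADs it ahead of the plugin so the sanitizer runtime is the first DSO loaded. A sketch of that discovery plus the fio invocation, with every flag taken from the trace:

PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # ldd output looks like: "libasan.so.6 => /lib/x86_64-linux-gnu/libasan.so.6 (0x...)"
  asan_lib=$(ldd "$PLUGIN" | grep "$sanitizer" | awk '{print $3}')
  [ -n "$asan_lib" ] && break
done
LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
  --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
  /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
  --verify_state_save=0 \
  --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output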
common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:16:07.256 12:35:49 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.256 12:35:49 -- common/autotest_common.sh@1280 -- # cat 00:16:07.256 12:35:49 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:16:07.256 12:35:49 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:16:07.256 12:35:49 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:16:07.256 12:35:49 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:07.257 12:35:49 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3db6f0e5-1efd-4e10-aba9-fe4a77e38807"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3db6f0e5-1efd-4e10-aba9-fe4a77e38807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1ee996a5-1308-5678-a3ee-27c89587608b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1ee996a5-1308-5678-a3ee-27c89587608b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "54c2abfd-d526-5a53-8ffd-59f34db7ba3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "54c2abfd-d526-5a53-8ffd-59f34db7ba3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bc0f33f8-6954-535e-8474-bfa1338d9296"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc0f33f8-6954-535e-8474-bfa1338d9296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "af4fd4b4-4d4f-52d6-92b7-32a39d8fab2f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af4fd4b4-4d4f-52d6-92b7-32a39d8fab2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "30347733-0a7b-53d5-97f2-5b7c5ed617aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30347733-0a7b-53d5-97f2-5b7c5ed617aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1a8f75ca-d20f-5e7d-8250-90cd9fbc8bef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1a8f75ca-d20f-5e7d-8250-90cd9fbc8bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9ccaa916-eb43-5920-aec8-32dfa368941d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ccaa916-eb43-5920-aec8-32dfa368941d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d939a2a7-a886-52c3-8e99-eb4b0df18d52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d939a2a7-a886-52c3-8e99-eb4b0df18d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9c0eda2c-25ec-5379-9823-301d5d73b3e0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9c0eda2c-25ec-5379-9823-301d5d73b3e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "324ca86c-bc11-5a5c-b54f-e0c6d826d185"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "324ca86c-bc11-5a5c-b54f-e0c6d826d185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d72252d0-c185-549e-b58a-ba8a2f73cc89"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d72252d0-c185-549e-b58a-ba8a2f73cc89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "8a119d07-1c81-464a-b953-f21ff82ac9b0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8a119d07-1c81-464a-b953-f21ff82ac9b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8a119d07-1c81-464a-b953-f21ff82ac9b0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "089d8dae-30fd-431b-ae45-ca7ca2919929",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8c6eccc2-4007-4b3b-ab05-a84cddc860ad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "92c2d48f-e61d-4ef9-a606-4fc2057d7a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92c2d48f-e61d-4ef9-a606-4fc2057d7a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "92c2d48f-e61d-4ef9-a606-4fc2057d7a94",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "fd82e449-01af-4383-9e2b-20df2fa74a41",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4e511596-0ded-419c-8a7d-7f7373835034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d3120532-d5df-4159-83db-435f7c41551f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d3120532-d5df-4159-83db-435f7c41551f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d3120532-d5df-4159-83db-435f7c41551f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "84fb3ce4-5842-46fd-81cf-2dbc2040d858",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b5a51e76-dd2b-4789-a94a-00440461fd6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b82abbcb-338b-451f-9bf1-db267deaea60"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b82abbcb-338b-451f-9bf1-db267deaea60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:16:07.257 12:35:49 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:16:07.257 Malloc1p0 00:16:07.257 Malloc1p1 00:16:07.257 Malloc2p0 00:16:07.257 Malloc2p1 00:16:07.257 Malloc2p2 00:16:07.257 Malloc2p3 00:16:07.257 Malloc2p4 00:16:07.257 Malloc2p5 00:16:07.257 Malloc2p6 00:16:07.257 Malloc2p7 00:16:07.257 TestPT 00:16:07.257 raid0 00:16:07.257 concat0 ]] 00:16:07.257 12:35:49 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:07.258 12:35:49 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3db6f0e5-1efd-4e10-aba9-fe4a77e38807"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3db6f0e5-1efd-4e10-aba9-fe4a77e38807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1ee996a5-1308-5678-a3ee-27c89587608b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1ee996a5-1308-5678-a3ee-27c89587608b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "54c2abfd-d526-5a53-8ffd-59f34db7ba3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "54c2abfd-d526-5a53-8ffd-59f34db7ba3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
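The two JSON dumps (@353 above and @354 continuing below) exist to feed a single jq filter: keep only the names of bdevs whose supported_io_types.unmap is true, since only those can take the trim/trimwrite workload that follows. That is why the [[ -n ... ]] list above stops at concat0: raid1 and AIO0 both report "unmap": false in the dump and drop out. The filter itself, runnable against such a dump saved to a file (bdevs.json is an assumed name):

# Each dump is a stream of JSON objects, so jq applies the filter per object.
jq -r 'select(.supported_io_types.unmap == true) | .name' bdevs.json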
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bc0f33f8-6954-535e-8474-bfa1338d9296"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc0f33f8-6954-535e-8474-bfa1338d9296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "af4fd4b4-4d4f-52d6-92b7-32a39d8fab2f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af4fd4b4-4d4f-52d6-92b7-32a39d8fab2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "30347733-0a7b-53d5-97f2-5b7c5ed617aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30347733-0a7b-53d5-97f2-5b7c5ed617aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1a8f75ca-d20f-5e7d-8250-90cd9fbc8bef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1a8f75ca-d20f-5e7d-8250-90cd9fbc8bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9ccaa916-eb43-5920-aec8-32dfa368941d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"9ccaa916-eb43-5920-aec8-32dfa368941d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d939a2a7-a886-52c3-8e99-eb4b0df18d52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d939a2a7-a886-52c3-8e99-eb4b0df18d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9c0eda2c-25ec-5379-9823-301d5d73b3e0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9c0eda2c-25ec-5379-9823-301d5d73b3e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "324ca86c-bc11-5a5c-b54f-e0c6d826d185"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "324ca86c-bc11-5a5c-b54f-e0c6d826d185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d72252d0-c185-549e-b58a-ba8a2f73cc89"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d72252d0-c185-549e-b58a-ba8a2f73cc89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "8a119d07-1c81-464a-b953-f21ff82ac9b0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8a119d07-1c81-464a-b953-f21ff82ac9b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8a119d07-1c81-464a-b953-f21ff82ac9b0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "089d8dae-30fd-431b-ae45-ca7ca2919929",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8c6eccc2-4007-4b3b-ab05-a84cddc860ad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "92c2d48f-e61d-4ef9-a606-4fc2057d7a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92c2d48f-e61d-4ef9-a606-4fc2057d7a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "92c2d48f-e61d-4ef9-a606-4fc2057d7a94",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "fd82e449-01af-4383-9e2b-20df2fa74a41",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4e511596-0ded-419c-8a7d-7f7373835034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d3120532-d5df-4159-83db-435f7c41551f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d3120532-d5df-4159-83db-435f7c41551f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d3120532-d5df-4159-83db-435f7c41551f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "84fb3ce4-5842-46fd-81cf-2dbc2040d858",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b5a51e76-dd2b-4789-a94a-00440461fd6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b82abbcb-338b-451f-9bf1-db267deaea60"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b82abbcb-338b-451f-9bf1-db267deaea60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:16:07.258 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.258 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:16:07.258 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:16:07.258 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.258 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:16:07.258 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:16:07.259 12:35:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:16:07.259 12:35:49 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:16:07.259 12:35:49 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:16:07.259 12:35:49 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.259 12:35:49 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:16:07.259 12:35:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.259 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.259 ************************************ 00:16:07.259 START TEST bdev_fio_trim 00:16:07.259 ************************************ 00:16:07.259 12:35:49 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.259 12:35:49 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.259 12:35:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:07.259 12:35:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:07.259 12:35:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:07.259 12:35:49 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.259 12:35:49 -- common/autotest_common.sh@1320 -- # shift 00:16:07.259 12:35:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:07.259 12:35:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:07.259 12:35:49 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.259 12:35:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:07.259 12:35:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:07.259 12:35:49 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:16:07.259 12:35:49 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:16:07.259 12:35:49 -- common/autotest_common.sh@1326 -- # break 00:16:07.259 12:35:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:07.259 12:35:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.518 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.518 fio-3.35 00:16:07.518 Starting 14 threads 00:16:19.721 00:16:19.721 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=111005: Tue Oct 1 12:36:01 2024 00:16:19.721 write: IOPS=139k, BW=544MiB/s (570MB/s)(5445MiB/10010msec); 0 zone resets 00:16:19.721 slat (nsec): min=1892, max=28034k, avg=36761.12, stdev=405815.41 00:16:19.721 clat (usec): min=15, max=32028, avg=248.57, stdev=1051.45 00:16:19.721 lat (usec): min=28, max=32052, avg=285.34, stdev=1126.33 00:16:19.721 clat percentiles (usec): 00:16:19.721 | 50.000th=[ 169], 99.000th=[ 433], 99.900th=[16188], 99.990th=[17433], 00:16:19.721 | 99.999th=[25297] 00:16:19.721 bw ( KiB/s): min=383232, max=788160, per=100.00%, avg=557207.91, stdev=10488.50, samples=268 00:16:19.721 iops : min=95808, max=197040, avg=139302.06, stdev=2622.12, samples=268 00:16:19.721 trim: IOPS=139k, BW=544MiB/s (570MB/s)(5445MiB/10010msec); 0 zone resets 00:16:19.721 slat (usec): min=3, max=28028, avg=25.44, stdev=334.47 00:16:19.721 clat (usec): min=3, max=32052, avg=273.29, stdev=1102.17 00:16:19.721 lat (usec): min=11, max=32069, avg=298.73, stdev=1151.54 00:16:19.721 clat percentiles (usec): 00:16:19.721 | 50.000th=[ 190], 99.000th=[ 371], 99.900th=[16319], 99.990th=[17171], 00:16:19.721 | 99.999th=[27395] 00:16:19.721 bw ( KiB/s): min=383240, max=788160, per=100.00%, avg=557208.33, stdev=10488.51, samples=268 00:16:19.721 iops : min=95810, max=197040, avg=139302.06, stdev=2622.13, samples=268 00:16:19.721 lat (usec) : 4=0.01%, 10=0.07%, 20=0.25%, 50=1.04%, 100=7.71% 00:16:19.721 lat (usec) : 250=73.30%, 500=16.99%, 750=0.10%, 1000=0.01% 00:16:19.721 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.47%, 50=0.01% 00:16:19.721 cpu : usr=69.22%, sys=0.33%, ctx=171871, majf=0, minf=798 00:16:19.721 IO depths : 1=12.3%, 2=24.6%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.721 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.721 issued rwts: total=0,1393805,1393806,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:19.721 00:16:19.721 Run status group 0 (all jobs): 00:16:19.721 WRITE: bw=544MiB/s (570MB/s), 544MiB/s-544MiB/s (570MB/s-570MB/s), io=5445MiB (5709MB), run=10010-10010msec 00:16:19.721 TRIM: bw=544MiB/s (570MB/s), 544MiB/s-544MiB/s (570MB/s-570MB/s), io=5445MiB (5709MB), run=10010-10010msec 00:16:21.624 ----------------------------------------------------- 00:16:21.624 Suppressions used: 00:16:21.624 count bytes template 00:16:21.624 14 129 /usr/src/fio/parse.c 00:16:21.624 1 904 libcrypto.so 00:16:21.624 ----------------------------------------------------- 00:16:21.624 00:16:21.624 00:16:21.624 real 0m14.052s 00:16:21.624 user 1m42.098s 00:16:21.624 sys 0m1.173s 00:16:21.624 12:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.624 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:16:21.624 ************************************ 00:16:21.624 END TEST bdev_fio_trim 00:16:21.624 ************************************ 00:16:21.624 
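The fourteen [job_*] sections echoed by blockdev.sh@354-356 above are generated from the bdev JSON dump: only bdevs whose supported_io_types report "unmap": true receive a trim job, which is why raid1 and AIO0 (both dumped with "unmap": false) are skipped and fio starts 14 threads rather than 16. A minimal sketch of that loop, reusing the exact jq filter from the trace; the bdevs array and the fio_config output path are stand-in names:

    # one fio job section per unmap-capable bdev, names taken from the JSON dump
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"      # section header, e.g. [job_Malloc0]
        echo "filename=$b"   # the spdk_bdev ioengine resolves this name via --spdk_json_conf
    done >> "$fio_config"    # appended to the bdev.fio template handed to fio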
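Also visible above: before launching fio, common/autotest_common.sh resolves the ASAN runtime the spdk_bdev plugin was linked against and preloads it ahead of the plugin, so the sanitizer initializes before fio dlopen()s the ioengine. A condensed sketch of that probe (remaining fio arguments elided):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # extract the ASAN runtime path from the plugin's dynamic dependencies
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer first, then the plugin, as in the LD_PRELOAD line above
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev ...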
12:36:03 -- bdev/blockdev.sh@366 -- # rm -f 00:16:21.624 12:36:03 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:21.624 12:36:03 -- bdev/blockdev.sh@368 -- # popd 00:16:21.624 /home/vagrant/spdk_repo/spdk 00:16:21.624 12:36:03 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:16:21.624 00:16:21.624 real 0m28.905s 00:16:21.624 user 3m19.803s 00:16:21.624 sys 0m5.327s 00:16:21.624 12:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.624 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:16:21.624 ************************************ 00:16:21.624 END TEST bdev_fio 00:16:21.624 ************************************ 00:16:21.624 12:36:03 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:21.624 12:36:03 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:21.624 12:36:03 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:16:21.624 12:36:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:21.624 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:16:21.624 ************************************ 00:16:21.624 START TEST bdev_verify 00:16:21.624 ************************************ 00:16:21.624 12:36:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:21.624 [2024-10-01 12:36:04.007325] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:21.624 [2024-10-01 12:36:04.007452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111198 ] 00:16:21.883 [2024-10-01 12:36:04.177408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.883 [2024-10-01 12:36:04.365049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.883 [2024-10-01 12:36:04.365058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.452 [2024-10-01 12:36:04.782407] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:22.452 [2024-10-01 12:36:04.782493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:22.452 [2024-10-01 12:36:04.790372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:22.452 [2024-10-01 12:36:04.790433] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:22.452 [2024-10-01 12:36:04.798411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:22.452 [2024-10-01 12:36:04.798447] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:16:22.452 [2024-10-01 12:36:04.798473] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:16:22.712 [2024-10-01 12:36:05.044075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:22.712 [2024-10-01 12:36:05.044212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.712 [2024-10-01 12:36:05.044259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009980 00:16:22.712 [2024-10-01 12:36:05.044278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.712 [2024-10-01 12:36:05.046701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.712 [2024-10-01 12:36:05.046753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:16:22.971 Running I/O for 5 seconds... 00:16:28.247 00:16:28.247 Latency(us) 00:16:28.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.247 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x1000 00:16:28.247 Malloc0 : 5.15 1683.45 6.58 0.00 0.00 75332.85 1723.94 197923.98 00:16:28.247 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x1000 length 0x1000 00:16:28.247 Malloc0 : 5.14 1764.07 6.89 0.00 0.00 72029.23 1625.24 215610.81 00:16:28.247 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x800 00:16:28.247 Malloc1p0 : 5.15 1152.74 4.50 0.00 0.00 109950.40 6606.24 186132.77 00:16:28.247 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x800 length 0x800 00:16:28.247 Malloc1p0 : 5.14 1220.10 4.77 0.00 0.00 104032.97 3289.96 135598.98 00:16:28.247 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x800 00:16:28.247 Malloc1p1 : 5.15 1152.48 4.50 0.00 0.00 109634.70 7053.67 176868.24 00:16:28.247 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x800 length 0x800 00:16:28.247 Malloc1p1 : 5.14 1219.87 4.77 0.00 0.00 103904.21 3737.39 131387.84 00:16:28.247 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x200 00:16:28.247 Malloc2p0 : 5.18 1161.65 4.54 0.00 0.00 108801.09 7001.03 165919.25 00:16:28.247 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x200 length 0x200 00:16:28.247 Malloc2p0 : 5.14 1219.65 4.76 0.00 0.00 103745.41 4053.23 128018.92 00:16:28.247 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x200 00:16:28.247 Malloc2p1 : 5.18 1161.44 4.54 0.00 0.00 108523.13 6737.84 158339.19 00:16:28.247 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x200 length 0x200 00:16:28.247 Malloc2p1 : 5.15 1219.42 4.76 0.00 0.00 103597.52 3842.67 123807.77 00:16:28.247 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x200 00:16:28.247 Malloc2p2 : 5.19 1161.24 4.54 0.00 0.00 108221.16 6948.40 151601.35 00:16:28.247 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x200 length 0x200 00:16:28.247 Malloc2p2 : 5.15 1219.19 4.76 0.00 0.00 103436.06 3947.95 120438.85 00:16:28.247 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x200 00:16:28.247 Malloc2p3 : 5.19 1161.03 4.54 0.00 0.00 107943.75 6395.68 146547.97 00:16:28.247 Job: 
Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x200 length 0x200 00:16:28.247 Malloc2p3 : 5.16 1232.09 4.81 0.00 0.00 102570.20 4184.83 116227.70 00:16:28.247 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.247 Verification LBA range: start 0x0 length 0x200 00:16:28.247 Malloc2p4 : 5.19 1160.82 4.53 0.00 0.00 107711.41 3026.76 146547.97 00:16:28.248 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x200 length 0x200 00:16:28.248 Malloc2p4 : 5.16 1231.86 4.81 0.00 0.00 102405.21 3737.39 112858.78 00:16:28.248 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 length 0x200 00:16:28.248 Malloc2p5 : 5.19 1160.61 4.53 0.00 0.00 107521.64 6290.40 143179.05 00:16:28.248 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x200 length 0x200 00:16:28.248 Malloc2p5 : 5.16 1231.63 4.81 0.00 0.00 102252.74 3974.27 107384.29 00:16:28.248 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 length 0x200 00:16:28.248 Malloc2p6 : 5.20 1173.49 4.58 0.00 0.00 106272.38 4053.23 140652.36 00:16:28.248 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x200 length 0x200 00:16:28.248 Malloc2p6 : 5.16 1231.41 4.81 0.00 0.00 102097.18 3842.67 101909.80 00:16:28.248 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 length 0x200 00:16:28.248 Malloc2p7 : 5.20 1173.28 4.58 0.00 0.00 106148.32 2263.49 143179.05 00:16:28.248 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x200 length 0x200 00:16:28.248 Malloc2p7 : 5.17 1231.20 4.81 0.00 0.00 101977.56 3974.27 96014.19 00:16:28.248 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 length 0x1000 00:16:28.248 TestPT : 5.20 1173.08 4.58 0.00 0.00 106055.27 2434.57 145705.74 00:16:28.248 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x1000 length 0x1000 00:16:28.248 TestPT : 5.17 1220.72 4.77 0.00 0.00 102665.15 4237.47 93487.50 00:16:28.248 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 length 0x2000 00:16:28.248 raid0 : 5.20 1172.86 4.58 0.00 0.00 105938.73 2618.81 146547.97 00:16:28.248 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x2000 length 0x2000 00:16:28.248 raid0 : 5.17 1230.75 4.81 0.00 0.00 101684.46 3816.35 91381.92 00:16:28.248 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 length 0x2000 00:16:28.248 concat0 : 5.20 1172.64 4.58 0.00 0.00 105767.26 5895.61 138967.90 00:16:28.248 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x2000 length 0x2000 00:16:28.248 concat0 : 5.17 1230.53 4.81 0.00 0.00 101560.32 3816.35 91381.92 00:16:28.248 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 
length 0x1000 00:16:28.248 raid1 : 5.20 1172.42 4.58 0.00 0.00 105488.63 6737.84 130545.61 00:16:28.248 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x1000 length 0x1000 00:16:28.248 raid1 : 5.17 1230.31 4.81 0.00 0.00 101397.47 4342.75 92224.15 00:16:28.248 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x0 length 0x4e2 00:16:28.248 AIO0 : 5.21 1186.56 4.64 0.00 0.00 103996.14 572.45 129703.38 00:16:28.248 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:28.248 Verification LBA range: start 0x4e2 length 0x4e2 00:16:28.248 AIO0 : 5.17 1229.55 4.80 0.00 0.00 101201.84 6895.76 95593.07 00:16:28.248 =================================================================================================================== 00:16:28.248 Total : 39342.17 153.68 0.00 0.00 102102.74 572.45 215610.81 00:16:30.784 00:16:30.784 real 0m9.254s 00:16:30.784 user 0m16.114s 00:16:30.784 sys 0m0.535s 00:16:30.784 12:36:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.784 12:36:13 -- common/autotest_common.sh@10 -- # set +x 00:16:30.784 ************************************ 00:16:30.784 END TEST bdev_verify 00:16:30.784 ************************************ 00:16:30.784 12:36:13 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:30.784 12:36:13 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:16:30.784 12:36:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.784 12:36:13 -- common/autotest_common.sh@10 -- # set +x 00:16:30.784 ************************************ 00:16:30.784 START TEST bdev_verify_big_io 00:16:30.784 ************************************ 00:16:30.784 12:36:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:31.044 [2024-10-01 12:36:13.330792] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
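bdev_verify above and bdev_verify_big_io starting here drive the same bdevperf example and differ only in I/O size: -q 128 sets the queue depth, -o the I/O size in bytes, -w verify selects a write-then-read-back-and-compare workload, -t the run time in seconds, and -m 0x3 pins the two reactors. Abbreviated forms of the two logged invocations (repository paths shortened):

    # 4 KiB verify, as in bdev_verify above
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # 64 KiB verify (this test); the depth is clamped on the small bdevs, see the warnings below
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3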
00:16:31.044 [2024-10-01 12:36:13.330929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111330 ] 00:16:31.044 [2024-10-01 12:36:13.506219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:31.303 [2024-10-01 12:36:13.695542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.303 [2024-10-01 12:36:13.695544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.882 [2024-10-01 12:36:14.116467] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:31.882 [2024-10-01 12:36:14.116554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:31.882 [2024-10-01 12:36:14.124441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:31.882 [2024-10-01 12:36:14.124504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:31.882 [2024-10-01 12:36:14.132443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:31.882 [2024-10-01 12:36:14.132480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:16:31.882 [2024-10-01 12:36:14.132508] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:16:31.882 [2024-10-01 12:36:14.350087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:31.882 [2024-10-01 12:36:14.350222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.882 [2024-10-01 12:36:14.350267] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:31.882 [2024-10-01 12:36:14.350286] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.882 [2024-10-01 12:36:14.352627] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.882 [2024-10-01 12:36:14.352679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:16:32.450 [2024-10-01 12:36:14.771267] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.775088] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.779246] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.783356] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.787163] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.791207] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.795068] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.799147] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.802967] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:16:32.450 [2024-10-01 12:36:14.807215] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:16:32.451 [2024-10-01 12:36:14.811113] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:16:32.451 [2024-10-01 12:36:14.815246] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:16:32.451 [2024-10-01 12:36:14.818990] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:16:32.451 [2024-10-01 12:36:14.823099] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:16:32.451 [2024-10-01 12:36:14.827155] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:16:32.451 [2024-10-01 12:36:14.831017] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:16:32.451 [2024-10-01 12:36:14.922562] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:16:32.451 [2024-10-01 12:36:14.930268] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:16:32.451 Running I/O for 5 seconds... 00:16:39.017 00:16:39.017 Latency(us) 00:16:39.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.017 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x100 00:16:39.017 Malloc0 : 5.52 360.42 22.53 0.00 0.00 343722.35 20318.79 909608.10 00:16:39.017 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x100 length 0x100 00:16:39.017 Malloc0 : 5.51 361.77 22.61 0.00 0.00 342524.19 21371.58 1078054.04 00:16:39.017 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x80 00:16:39.017 Malloc1p0 : 5.52 260.07 16.25 0.00 0.00 472622.20 44638.18 1098267.55 00:16:39.017 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x80 length 0x80 00:16:39.017 Malloc1p0 : 5.67 210.84 13.18 0.00 0.00 574212.14 43374.83 976986.47 00:16:39.017 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x80 00:16:39.017 Malloc1p1 : 5.82 119.34 7.46 0.00 0.00 994234.46 42111.49 2021351.33 00:16:39.017 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x80 length 0x80 00:16:39.017 Malloc1p1 : 5.82 130.29 8.14 0.00 0.00 921047.92 42111.49 1953972.95 00:16:39.017 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p0 : 5.59 69.79 4.36 0.00 0.00 431824.74 6106.17 731055.40 00:16:39.017 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.017 Malloc2p0 : 5.60 72.91 4.56 0.00 0.00 412269.58 5948.25 626618.91 00:16:39.017 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p1 : 5.59 69.78 4.36 0.00 0.00 430073.74 7053.67 717579.72 00:16:39.017 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.017 Malloc2p1 : 5.60 72.89 4.56 0.00 0.00 410754.20 7158.95 613143.24 00:16:39.017 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p2 : 5.59 69.76 4.36 0.00 0.00 428241.42 6790.48 704104.04 00:16:39.017 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.017 Malloc2p2 : 5.60 72.88 4.55 0.00 0.00 408999.68 6948.40 599667.56 00:16:39.017 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p3 : 5.59 69.75 4.36 0.00 0.00 426464.17 6343.04 690628.37 00:16:39.017 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.017 Malloc2p3 : 5.60 72.86 4.55 0.00 0.00 407312.19 6527.28 589560.80 00:16:39.017 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p4 : 5.59 69.74 4.36 0.00 0.00 424618.35 6132.49 677152.69 00:16:39.017 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.017 Malloc2p4 : 5.60 72.85 4.55 0.00 0.00 405621.66 6369.36 576085.13 00:16:39.017 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p5 : 5.59 69.73 4.36 0.00 0.00 422905.74 6658.88 660308.10 00:16:39.017 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.017 Malloc2p5 : 5.60 72.84 4.55 0.00 0.00 404072.87 6895.76 562609.45 00:16:39.017 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p6 : 5.59 69.71 4.36 0.00 0.00 421205.53 5842.97 646832.42 00:16:39.017 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.017 Malloc2p6 : 5.60 72.83 4.55 0.00 0.00 402452.92 6474.64 549133.78 00:16:39.017 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x0 length 0x20 00:16:39.017 Malloc2p7 : 5.60 69.70 4.36 0.00 0.00 419528.93 7369.51 629987.83 00:16:39.017 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:16:39.017 Verification LBA range: start 0x20 length 0x20 00:16:39.018 Malloc2p7 : 5.60 72.81 4.55 0.00 0.00 400876.70 7632.71 535658.10 00:16:39.018 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x0 length 0x100 00:16:39.018 TestPT : 5.83 124.26 7.77 0.00 0.00 916641.70 39584.80 2007875.65 00:16:39.018 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x100 length 0x100 00:16:39.018 TestPT : 5.85 124.70 7.79 0.00 0.00 920954.22 55587.16 1980924.30 00:16:39.018 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x0 length 0x200 00:16:39.018 raid0 : 5.78 137.63 8.60 0.00 0.00 826367.02 46743.75 2021351.33 00:16:39.018 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x200 length 0x200 00:16:39.018 raid0 : 5.85 136.13 8.51 0.00 0.00 835097.09 42743.16 1927021.60 00:16:39.018 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x0 length 0x200 00:16:39.018 concat0 : 5.83 142.25 8.89 0.00 0.00 783586.93 30320.27 2034827.00 00:16:39.018 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x200 length 0x200 00:16:39.018 concat0 : 5.85 141.67 8.85 0.00 0.00 
793236.13 43585.39 1913545.92 00:16:39.018 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x0 length 0x100 00:16:39.018 raid1 : 5.85 159.92 10.00 0.00 0.00 689769.61 21582.14 2048302.68 00:16:39.018 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x100 length 0x100 00:16:39.018 raid1 : 5.82 152.67 9.54 0.00 0.00 727427.84 25372.17 1927021.60 00:16:39.018 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x0 length 0x4e 00:16:39.018 AIO0 : 5.88 169.44 10.59 0.00 0.00 391441.38 1079.11 1185859.44 00:16:39.018 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:16:39.018 Verification LBA range: start 0x4e length 0x4e 00:16:39.018 AIO0 : 5.85 171.26 10.70 0.00 0.00 390733.46 980.41 1118481.07 00:16:39.018 =================================================================================================================== 00:16:39.018 Total : 4043.47 252.72 0.00 0.00 557699.69 980.41 2048302.68 00:16:41.553 00:16:41.553 real 0m10.422s 00:16:41.553 user 0m19.194s 00:16:41.553 sys 0m0.413s 00:16:41.553 12:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.553 12:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:41.553 ************************************ 00:16:41.553 END TEST bdev_verify_big_io 00:16:41.553 ************************************ 00:16:41.553 12:36:23 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:41.553 12:36:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:16:41.553 12:36:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:41.553 12:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:41.553 ************************************ 00:16:41.553 START TEST bdev_write_zeroes 00:16:41.553 ************************************ 00:16:41.553 12:36:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:41.553 [2024-10-01 12:36:23.835864] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
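The write_zeroes run below issues zero-fill commands instead of data writes, exercising the "write_zeroes" capability that every bdev dumped earlier advertises in supported_io_types. Assuming a running SPDK application with the default RPC socket, the capable bdevs can be listed the same way the trim suite filtered on unmap:

    # names of bdevs that advertise zero-fill support
    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'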
00:16:41.553 [2024-10-01 12:36:23.836024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111479 ] 00:16:41.553 [2024-10-01 12:36:23.998836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.813 [2024-10-01 12:36:24.187368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.381 [2024-10-01 12:36:24.607883] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:42.382 [2024-10-01 12:36:24.607983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:16:42.382 [2024-10-01 12:36:24.615848] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:42.382 [2024-10-01 12:36:24.615936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:16:42.382 [2024-10-01 12:36:24.623857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:42.382 [2024-10-01 12:36:24.623909] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:16:42.382 [2024-10-01 12:36:24.623950] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:16:42.382 [2024-10-01 12:36:24.839522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:42.382 [2024-10-01 12:36:24.839621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.382 [2024-10-01 12:36:24.839676] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:42.382 [2024-10-01 12:36:24.839704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.382 [2024-10-01 12:36:24.841895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.382 [2024-10-01 12:36:24.841968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:16:42.951 Running I/O for 1 seconds... 
00:16:43.889 00:16:43.889 Latency(us) 00:16:43.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.889 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc0 : 1.02 7141.11 27.89 0.00 0.00 17912.57 503.36 29899.16 00:16:43.889 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc1p0 : 1.02 7133.95 27.87 0.00 0.00 17908.60 681.02 29267.48 00:16:43.889 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc1p1 : 1.02 7127.23 27.84 0.00 0.00 17898.08 641.54 28635.81 00:16:43.889 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p0 : 1.02 7120.45 27.81 0.00 0.00 17889.50 608.64 28004.14 00:16:43.889 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p1 : 1.03 7113.47 27.79 0.00 0.00 17882.83 608.64 27583.02 00:16:43.889 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p2 : 1.03 7106.93 27.76 0.00 0.00 17871.89 608.64 26951.35 00:16:43.889 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p3 : 1.04 7136.52 27.88 0.00 0.00 17773.55 644.83 26319.68 00:16:43.889 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p4 : 1.04 7129.39 27.85 0.00 0.00 17758.66 651.41 25688.01 00:16:43.889 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p5 : 1.04 7122.73 27.82 0.00 0.00 17753.07 608.64 25056.33 00:16:43.889 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p6 : 1.04 7116.25 27.80 0.00 0.00 17745.25 608.64 24424.66 00:16:43.889 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 Malloc2p7 : 1.04 7109.86 27.77 0.00 0.00 17733.95 644.83 23898.27 00:16:43.889 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 TestPT : 1.05 7102.93 27.75 0.00 0.00 17724.68 638.25 23266.60 00:16:43.889 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 raid0 : 1.05 7094.99 27.71 0.00 0.00 17708.39 1065.95 22213.81 00:16:43.889 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 concat0 : 1.05 7086.96 27.68 0.00 0.00 17685.56 1059.37 21161.02 00:16:43.889 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 raid1 : 1.05 7077.22 27.65 0.00 0.00 17660.39 1697.62 19476.56 00:16:43.889 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.889 AIO0 : 1.05 7054.33 27.56 0.00 0.00 17649.64 1276.50 19160.73 00:16:43.889 =================================================================================================================== 00:16:43.889 Total : 113774.31 444.43 0.00 0.00 17784.10 503.36 29899.16 00:16:46.438 00:16:46.438 real 0m4.943s 00:16:46.438 user 0m4.370s 00:16:46.438 sys 0m0.368s 00:16:46.438 12:36:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.438 12:36:28 -- common/autotest_common.sh@10 -- # set +x 00:16:46.438 ************************************ 00:16:46.438 END TEST bdev_write_zeroes 00:16:46.438 ************************************ 00:16:46.438 12:36:28 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:46.438 12:36:28 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:16:46.438 12:36:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:46.438 12:36:28 -- common/autotest_common.sh@10 -- # set +x 00:16:46.438 ************************************ 00:16:46.438 START TEST bdev_json_nonenclosed 00:16:46.438 ************************************ 00:16:46.438 12:36:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:46.438 [2024-10-01 12:36:28.861527] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:46.438 [2024-10-01 12:36:28.861662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111562 ] 00:16:46.700 [2024-10-01 12:36:29.025501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.700 [2024-10-01 12:36:29.219514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.700 [2024-10-01 12:36:29.219700] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:46.700 [2024-10-01 12:36:29.219738] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:47.269 00:16:47.269 real 0m0.847s 00:16:47.269 user 0m0.626s 00:16:47.269 sys 0m0.121s 00:16:47.269 12:36:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.269 ************************************ 00:16:47.269 END TEST bdev_json_nonenclosed 00:16:47.269 12:36:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.269 ************************************ 00:16:47.269 12:36:29 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:47.269 12:36:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:16:47.269 12:36:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:47.269 12:36:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.269 ************************************ 00:16:47.269 START TEST bdev_json_nonarray 00:16:47.269 ************************************ 00:16:47.269 12:36:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:47.269 [2024-10-01 12:36:29.786810] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
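The two json_config errors around this point are deliberate: bdev_json_nonenclosed above feeds bdevperf a config whose top level is not a JSON object, and bdev_json_nonarray below one whose "subsystems" key is not an array; both runs are expected to fail inside spdk_subsystem_init_from_json_config. The fixture contents are not echoed in the trace, but a minimal pair consistent with the two error messages (hypothetical reconstructions) would look like:

    valid shape:      { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    nonenclosed.json: "subsystems": []        <- hypothetical: not enclosed in {}
    nonarray.json:    { "subsystems": {} }    <- hypothetical: object where an array is required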
00:16:47.269 [2024-10-01 12:36:29.786947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111600 ] 00:16:47.528 [2024-10-01 12:36:29.949467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.788 [2024-10-01 12:36:30.148211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.788 [2024-10-01 12:36:30.148382] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:47.788 [2024-10-01 12:36:30.148425] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:48.047 00:16:48.047 real 0m0.854s 00:16:48.047 user 0m0.613s 00:16:48.047 sys 0m0.141s 00:16:48.047 12:36:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.307 12:36:30 -- common/autotest_common.sh@10 -- # set +x 00:16:48.307 ************************************ 00:16:48.307 END TEST bdev_json_nonarray 00:16:48.307 ************************************ 00:16:48.307 12:36:30 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:16:48.307 12:36:30 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:16:48.307 12:36:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:48.307 12:36:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:48.307 12:36:30 -- common/autotest_common.sh@10 -- # set +x 00:16:48.307 ************************************ 00:16:48.307 START TEST bdev_qos 00:16:48.307 ************************************ 00:16:48.307 12:36:30 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:16:48.307 12:36:30 -- bdev/blockdev.sh@444 -- # QOS_PID=111638 00:16:48.307 Process qos testing pid: 111638 00:16:48.307 12:36:30 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 111638' 00:16:48.307 12:36:30 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:16:48.307 12:36:30 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:16:48.307 12:36:30 -- bdev/blockdev.sh@447 -- # waitforlisten 111638 00:16:48.307 12:36:30 -- common/autotest_common.sh@819 -- # '[' -z 111638 ']' 00:16:48.307 12:36:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.307 12:36:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.307 12:36:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.307 12:36:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.307 12:36:30 -- common/autotest_common.sh@10 -- # set +x 00:16:48.307 [2024-10-01 12:36:30.722113] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
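The QoS suite booting here builds its whole fixture over JSON-RPC: bdevperf is started with -z so it waits to be driven over RPC rather than running immediately, waitforlisten blocks on /var/tmp/spdk.sock, and the script then creates one throttleable malloc bdev and one null bdev before any I/O is issued. Stripped of the xtrace framing, the setup recorded over the next lines is approximately the following (a sketch using scripts/rpc.py directly; the suite's rpc_cmd helper is a thin wrapper over it):

# two 128 MiB test bdevs with 512-byte blocks (262144 blocks each)
scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
scripts/rpc.py bdev_null_create Null_1 128 512
# waitforbdev: confirm the bdev registered, bounded by the 2000 ms timeout
scripts/rpc.py bdev_get_bdevs -b Malloc_0 -t 2000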
00:16:48.307 [2024-10-01 12:36:30.722249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111638 ] 00:16:48.566 [2024-10-01 12:36:30.886743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.566 [2024-10-01 12:36:31.075506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.133 12:36:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.133 12:36:31 -- common/autotest_common.sh@852 -- # return 0 00:16:49.133 12:36:31 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:16:49.133 12:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.133 12:36:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.392 Malloc_0 00:16:49.392 12:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.392 12:36:31 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:16:49.392 12:36:31 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:16:49.392 12:36:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:49.392 12:36:31 -- common/autotest_common.sh@889 -- # local i 00:16:49.392 12:36:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:49.392 12:36:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:49.392 12:36:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:16:49.392 12:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.392 12:36:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.392 12:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.392 12:36:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:16:49.392 12:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.392 12:36:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.392 [ 00:16:49.392 { 00:16:49.392 "name": "Malloc_0", 00:16:49.392 "aliases": [ 00:16:49.392 "0cc5f7d5-8f53-4f3e-939f-a66e0acbf447" 00:16:49.392 ], 00:16:49.392 "product_name": "Malloc disk", 00:16:49.392 "block_size": 512, 00:16:49.392 "num_blocks": 262144, 00:16:49.392 "uuid": "0cc5f7d5-8f53-4f3e-939f-a66e0acbf447", 00:16:49.392 "assigned_rate_limits": { 00:16:49.392 "rw_ios_per_sec": 0, 00:16:49.392 "rw_mbytes_per_sec": 0, 00:16:49.392 "r_mbytes_per_sec": 0, 00:16:49.392 "w_mbytes_per_sec": 0 00:16:49.392 }, 00:16:49.392 "claimed": false, 00:16:49.392 "zoned": false, 00:16:49.392 "supported_io_types": { 00:16:49.392 "read": true, 00:16:49.392 "write": true, 00:16:49.392 "unmap": true, 00:16:49.392 "write_zeroes": true, 00:16:49.392 "flush": true, 00:16:49.392 "reset": true, 00:16:49.392 "compare": false, 00:16:49.392 "compare_and_write": false, 00:16:49.392 "abort": true, 00:16:49.392 "nvme_admin": false, 00:16:49.392 "nvme_io": false 00:16:49.392 }, 00:16:49.392 "memory_domains": [ 00:16:49.392 { 00:16:49.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.392 "dma_device_type": 2 00:16:49.392 } 00:16:49.392 ], 00:16:49.392 "driver_specific": {} 00:16:49.392 } 00:16:49.392 ] 00:16:49.392 12:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.392 12:36:31 -- common/autotest_common.sh@895 -- # return 0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:16:49.392 12:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.392 12:36:31 -- common/autotest_common.sh@10 -- # 
set +x 00:16:49.392 Null_1 00:16:49.392 12:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.392 12:36:31 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:16:49.392 12:36:31 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:16:49.392 12:36:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:49.392 12:36:31 -- common/autotest_common.sh@889 -- # local i 00:16:49.392 12:36:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:49.392 12:36:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:49.392 12:36:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:16:49.392 12:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.392 12:36:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.392 12:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.392 12:36:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:16:49.392 12:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:49.392 12:36:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.392 [ 00:16:49.392 { 00:16:49.392 "name": "Null_1", 00:16:49.392 "aliases": [ 00:16:49.392 "9acadea2-8961-4c60-a9bc-37b1c936b7bc" 00:16:49.392 ], 00:16:49.392 "product_name": "Null disk", 00:16:49.392 "block_size": 512, 00:16:49.392 "num_blocks": 262144, 00:16:49.392 "uuid": "9acadea2-8961-4c60-a9bc-37b1c936b7bc", 00:16:49.392 "assigned_rate_limits": { 00:16:49.392 "rw_ios_per_sec": 0, 00:16:49.392 "rw_mbytes_per_sec": 0, 00:16:49.392 "r_mbytes_per_sec": 0, 00:16:49.392 "w_mbytes_per_sec": 0 00:16:49.392 }, 00:16:49.392 "claimed": false, 00:16:49.392 "zoned": false, 00:16:49.392 "supported_io_types": { 00:16:49.392 "read": true, 00:16:49.392 "write": true, 00:16:49.392 "unmap": false, 00:16:49.392 "write_zeroes": true, 00:16:49.392 "flush": false, 00:16:49.392 "reset": true, 00:16:49.392 "compare": false, 00:16:49.392 "compare_and_write": false, 00:16:49.392 "abort": true, 00:16:49.392 "nvme_admin": false, 00:16:49.392 "nvme_io": false 00:16:49.392 }, 00:16:49.392 "driver_specific": {} 00:16:49.392 } 00:16:49.392 ] 00:16:49.392 12:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:49.392 12:36:31 -- common/autotest_common.sh@895 -- # return 0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@455 -- # qos_function_test 00:16:49.392 12:36:31 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:16:49.392 12:36:31 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:49.392 12:36:31 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:16:49.392 12:36:31 -- bdev/blockdev.sh@410 -- # local io_result=0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:16:49.392 12:36:31 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@375 -- # local iostat_result 00:16:49.392 12:36:31 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:16:49.392 12:36:31 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:16:49.392 12:36:31 -- bdev/blockdev.sh@376 -- # tail -1 00:16:49.392 Running I/O for 60 seconds... 
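This is the heart of qos_function_test: run Malloc_0 unthrottled for 60 seconds, read the achieved rate out of iostat.py (the 94666 IOPS below), derive a cap from it (23000 IOPS in this run), apply the cap, and let run_qos_test verify that the throttled rate lands within ±10% of it — which is where the 20700/25300 bounds printed later come from. The same loop then repeats for bandwidth on Null_1 (--rw_mbytes_per_sec 13, scaled down from the 133120 figure iostat reports) and for a 2 MB/s read-only cap on Malloc_0. In plain shell the pattern is roughly this (a sketch, not the verbatim blockdev.sh):

# one stats row per bdev; column 2 carries the IOPS figure,
# the bandwidth checks read column 6 instead
iops=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
limit=23000    # derived from $iops in the real test
scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$limit" Malloc_0
# re-measure and require the throttled result inside [90%, 110%] of the cap
result=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
result=${result%.*}
lower=$((limit * 90 / 100)); upper=$((limit * 110 / 100))
[ "$result" -ge "$lower" ] && [ "$result" -le "$upper" ]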
00:16:54.665 12:36:36 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 94666.72 378666.87 0.00 0.00 382976.00 0.00 0.00 ' 00:16:54.665 12:36:36 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:16:54.665 12:36:36 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:16:54.665 12:36:36 -- bdev/blockdev.sh@378 -- # iostat_result=94666.72 00:16:54.665 12:36:36 -- bdev/blockdev.sh@383 -- # echo 94666 00:16:54.665 12:36:36 -- bdev/blockdev.sh@414 -- # io_result=94666 00:16:54.665 12:36:36 -- bdev/blockdev.sh@416 -- # iops_limit=23000 00:16:54.665 12:36:36 -- bdev/blockdev.sh@417 -- # '[' 23000 -gt 1000 ']' 00:16:54.665 12:36:36 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 23000 Malloc_0 00:16:54.665 12:36:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.665 12:36:36 -- common/autotest_common.sh@10 -- # set +x 00:16:54.665 12:36:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.665 12:36:36 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 23000 IOPS Malloc_0 00:16:54.665 12:36:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:54.665 12:36:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:54.665 12:36:36 -- common/autotest_common.sh@10 -- # set +x 00:16:54.665 ************************************ 00:16:54.665 START TEST bdev_qos_iops 00:16:54.665 ************************************ 00:16:54.665 12:36:36 -- common/autotest_common.sh@1104 -- # run_qos_test 23000 IOPS Malloc_0 00:16:54.665 12:36:36 -- bdev/blockdev.sh@387 -- # local qos_limit=23000 00:16:54.665 12:36:36 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:16:54.665 12:36:36 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:16:54.665 12:36:36 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:16:54.665 12:36:36 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:16:54.665 12:36:36 -- bdev/blockdev.sh@375 -- # local iostat_result 00:16:54.665 12:36:36 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:16:54.665 12:36:36 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:16:54.665 12:36:36 -- bdev/blockdev.sh@376 -- # tail -1 00:17:00.014 12:36:42 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 23006.90 92027.59 0.00 0.00 93748.00 0.00 0.00 ' 00:17:00.014 12:36:42 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:17:00.014 12:36:42 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:17:00.014 12:36:42 -- bdev/blockdev.sh@378 -- # iostat_result=23006.90 00:17:00.014 12:36:42 -- bdev/blockdev.sh@383 -- # echo 23006 00:17:00.014 12:36:42 -- bdev/blockdev.sh@390 -- # qos_result=23006 00:17:00.014 12:36:42 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:17:00.014 12:36:42 -- bdev/blockdev.sh@394 -- # lower_limit=20700 00:17:00.014 12:36:42 -- bdev/blockdev.sh@395 -- # upper_limit=25300 00:17:00.014 12:36:42 -- bdev/blockdev.sh@398 -- # '[' 23006 -lt 20700 ']' 00:17:00.014 12:36:42 -- bdev/blockdev.sh@398 -- # '[' 23006 -gt 25300 ']' 00:17:00.014 00:17:00.014 real 0m5.186s 00:17:00.014 user 0m0.091s 00:17:00.014 sys 0m0.042s 00:17:00.014 12:36:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.014 12:36:42 -- common/autotest_common.sh@10 -- # set +x 00:17:00.014 ************************************ 00:17:00.014 END TEST bdev_qos_iops 00:17:00.014 ************************************ 00:17:00.014 12:36:42 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:17:00.014 12:36:42 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:17:00.014 12:36:42 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:17:00.014 12:36:42 -- bdev/blockdev.sh@375 -- # local iostat_result 00:17:00.014 12:36:42 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:17:00.014 12:36:42 -- bdev/blockdev.sh@376 -- # grep Null_1 00:17:00.014 12:36:42 -- bdev/blockdev.sh@376 -- # tail -1 00:17:05.289 12:36:47 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 32783.23 131132.94 0.00 0.00 133120.00 0.00 0.00 ' 00:17:05.289 12:36:47 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:17:05.289 12:36:47 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:17:05.289 12:36:47 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:17:05.289 12:36:47 -- bdev/blockdev.sh@380 -- # iostat_result=133120.00 00:17:05.289 12:36:47 -- bdev/blockdev.sh@383 -- # echo 133120 00:17:05.289 12:36:47 -- bdev/blockdev.sh@425 -- # bw_limit=133120 00:17:05.289 12:36:47 -- bdev/blockdev.sh@426 -- # bw_limit=13 00:17:05.289 12:36:47 -- bdev/blockdev.sh@427 -- # '[' 13 -lt 2 ']' 00:17:05.289 12:36:47 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 13 Null_1 00:17:05.289 12:36:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.289 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:17:05.289 12:36:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.289 12:36:47 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 13 BANDWIDTH Null_1 00:17:05.289 12:36:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:05.289 12:36:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:05.289 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:17:05.289 ************************************ 00:17:05.289 START TEST bdev_qos_bw 00:17:05.289 ************************************ 00:17:05.289 12:36:47 -- common/autotest_common.sh@1104 -- # run_qos_test 13 BANDWIDTH Null_1 00:17:05.289 12:36:47 -- bdev/blockdev.sh@387 -- # local qos_limit=13 00:17:05.289 12:36:47 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:17:05.289 12:36:47 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:17:05.289 12:36:47 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:17:05.289 12:36:47 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:17:05.289 12:36:47 -- bdev/blockdev.sh@375 -- # local iostat_result 00:17:05.289 12:36:47 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:17:05.289 12:36:47 -- bdev/blockdev.sh@376 -- # grep Null_1 00:17:05.289 12:36:47 -- bdev/blockdev.sh@376 -- # tail -1 00:17:10.561 12:36:52 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3328.81 13315.22 0.00 0.00 13616.00 0.00 0.00 ' 00:17:10.561 12:36:52 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:17:10.561 12:36:52 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:17:10.561 12:36:52 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:17:10.561 12:36:52 -- bdev/blockdev.sh@380 -- # iostat_result=13616.00 00:17:10.561 12:36:52 -- bdev/blockdev.sh@383 -- # echo 13616 00:17:10.561 12:36:52 -- bdev/blockdev.sh@390 -- # qos_result=13616 00:17:10.561 12:36:52 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:17:10.561 12:36:52 -- bdev/blockdev.sh@392 -- # qos_limit=13312 00:17:10.561 12:36:52 -- bdev/blockdev.sh@394 -- # lower_limit=11980 00:17:10.561 12:36:52 -- bdev/blockdev.sh@395 -- # upper_limit=14643 00:17:10.561 12:36:52 -- bdev/blockdev.sh@398 -- # '[' 13616 -lt 11980 ']' 00:17:10.561 12:36:52 -- bdev/blockdev.sh@398 -- # '[' 
13616 -gt 14643 ']' 00:17:10.561 00:17:10.561 real 0m5.212s 00:17:10.561 user 0m0.087s 00:17:10.561 sys 0m0.045s 00:17:10.561 12:36:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.561 12:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:10.561 ************************************ 00:17:10.561 END TEST bdev_qos_bw 00:17:10.562 ************************************ 00:17:10.562 12:36:52 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:17:10.562 12:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.562 12:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:10.562 12:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.562 12:36:52 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:17:10.562 12:36:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:10.562 12:36:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:10.562 12:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:10.562 ************************************ 00:17:10.562 START TEST bdev_qos_ro_bw 00:17:10.562 ************************************ 00:17:10.562 12:36:52 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:17:10.562 12:36:52 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:17:10.562 12:36:52 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:17:10.562 12:36:52 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:17:10.562 12:36:52 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:17:10.562 12:36:52 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:17:10.562 12:36:52 -- bdev/blockdev.sh@375 -- # local iostat_result 00:17:10.562 12:36:52 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:17:10.562 12:36:52 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:17:10.562 12:36:52 -- bdev/blockdev.sh@376 -- # tail -1 00:17:15.872 12:36:57 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.95 2047.82 0.00 0.00 2068.00 0.00 0.00 ' 00:17:15.872 12:36:57 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:17:15.872 12:36:57 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:17:15.872 12:36:57 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:17:15.872 12:36:57 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:17:15.872 12:36:57 -- bdev/blockdev.sh@383 -- # echo 2068 00:17:15.872 12:36:57 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:17:15.872 12:36:57 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:17:15.872 12:36:57 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:17:15.872 12:36:57 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:17:15.872 12:36:57 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:17:15.872 12:36:57 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:17:15.872 12:36:57 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:17:15.872 00:17:15.872 real 0m5.154s 00:17:15.872 user 0m0.108s 00:17:15.872 sys 0m0.025s 00:17:15.872 12:36:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.872 12:36:57 -- common/autotest_common.sh@10 -- # set +x 00:17:15.872 ************************************ 00:17:15.872 END TEST bdev_qos_ro_bw 00:17:15.872 ************************************ 00:17:15.872 12:36:57 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:17:15.872 12:36:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.872 12:36:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.131 12:36:58 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:16.131 12:36:58 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:17:16.131 12:36:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:16.131 12:36:58 -- common/autotest_common.sh@10 -- # set +x 00:17:16.390 00:17:16.390 Latency(us) 00:17:16.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.390 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:17:16.390 Malloc_0 : 26.69 31765.73 124.08 0.00 0.00 7982.39 1566.02 501968.91 00:17:16.390 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:17:16.390 Null_1 : 26.90 31942.02 124.77 0.00 0.00 8000.34 519.81 197923.98 00:17:16.390 =================================================================================================================== 00:17:16.390 Total : 63707.75 248.86 0.00 0.00 7991.42 519.81 501968.91 00:17:16.390 0 00:17:16.390 12:36:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:16.390 12:36:58 -- bdev/blockdev.sh@459 -- # killprocess 111638 00:17:16.390 12:36:58 -- common/autotest_common.sh@926 -- # '[' -z 111638 ']' 00:17:16.390 12:36:58 -- common/autotest_common.sh@930 -- # kill -0 111638 00:17:16.390 12:36:58 -- common/autotest_common.sh@931 -- # uname 00:17:16.390 12:36:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.390 12:36:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111638 00:17:16.390 12:36:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:16.390 killing process with pid 111638 00:17:16.390 12:36:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:16.390 12:36:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111638' 00:17:16.390 Received shutdown signal, test time was about 26.943135 seconds 00:17:16.390 00:17:16.390 Latency(us) 00:17:16.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.390 =================================================================================================================== 00:17:16.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.391 12:36:58 -- common/autotest_common.sh@945 -- # kill 111638 00:17:16.391 12:36:58 -- common/autotest_common.sh@950 -- # wait 111638 00:17:17.785 12:37:00 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:17:17.785 00:17:17.785 real 0m29.621s 00:17:17.785 user 0m30.180s 00:17:17.785 sys 0m0.731s 00:17:17.785 12:37:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.785 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:17:17.785 ************************************ 00:17:17.785 END TEST bdev_qos 00:17:17.785 ************************************ 00:17:18.045 12:37:00 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:17:18.045 12:37:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:18.045 12:37:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:18.045 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:17:18.045 ************************************ 00:17:18.045 START TEST bdev_qd_sampling 00:17:18.045 ************************************ 00:17:18.045 12:37:00 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:17:18.045 12:37:00 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:17:18.045 12:37:00 -- bdev/blockdev.sh@539 -- # QD_PID=112120 00:17:18.045 Process bdev QD sampling period testing pid: 112120 00:17:18.045 12:37:00 -- 
bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 112120' 00:17:18.045 12:37:00 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:17:18.045 12:37:00 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:17:18.045 12:37:00 -- bdev/blockdev.sh@542 -- # waitforlisten 112120 00:17:18.045 12:37:00 -- common/autotest_common.sh@819 -- # '[' -z 112120 ']' 00:17:18.045 12:37:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.045 12:37:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:18.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.045 12:37:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.045 12:37:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:18.045 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:17:18.045 [2024-10-01 12:37:00.423047] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:18.045 [2024-10-01 12:37:00.423177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112120 ] 00:17:18.305 [2024-10-01 12:37:00.592446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:18.305 [2024-10-01 12:37:00.780509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.305 [2024-10-01 12:37:00.780511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.872 12:37:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.872 12:37:01 -- common/autotest_common.sh@852 -- # return 0 00:17:18.872 12:37:01 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:17:18.872 12:37:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:18.872 12:37:01 -- common/autotest_common.sh@10 -- # set +x 00:17:19.132 Malloc_QD 00:17:19.132 12:37:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:19.132 12:37:01 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:17:19.132 12:37:01 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:17:19.132 12:37:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:19.132 12:37:01 -- common/autotest_common.sh@889 -- # local i 00:17:19.132 12:37:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:19.132 12:37:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:19.132 12:37:01 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:17:19.132 12:37:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:19.132 12:37:01 -- common/autotest_common.sh@10 -- # set +x 00:17:19.132 12:37:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:19.132 12:37:01 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:17:19.132 12:37:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:19.132 12:37:01 -- common/autotest_common.sh@10 -- # set +x 00:17:19.132 [ 00:17:19.132 { 00:17:19.132 "name": "Malloc_QD", 00:17:19.132 "aliases": [ 00:17:19.132 "00993ca6-897f-4db4-ad41-1cf5df3b621f" 00:17:19.132 ], 00:17:19.132 "product_name": "Malloc disk", 00:17:19.132 "block_size": 512, 00:17:19.132 "num_blocks": 262144, 
00:17:19.132 "uuid": "00993ca6-897f-4db4-ad41-1cf5df3b621f", 00:17:19.132 "assigned_rate_limits": { 00:17:19.132 "rw_ios_per_sec": 0, 00:17:19.132 "rw_mbytes_per_sec": 0, 00:17:19.132 "r_mbytes_per_sec": 0, 00:17:19.132 "w_mbytes_per_sec": 0 00:17:19.132 }, 00:17:19.132 "claimed": false, 00:17:19.132 "zoned": false, 00:17:19.132 "supported_io_types": { 00:17:19.132 "read": true, 00:17:19.132 "write": true, 00:17:19.132 "unmap": true, 00:17:19.132 "write_zeroes": true, 00:17:19.132 "flush": true, 00:17:19.132 "reset": true, 00:17:19.132 "compare": false, 00:17:19.132 "compare_and_write": false, 00:17:19.132 "abort": true, 00:17:19.132 "nvme_admin": false, 00:17:19.132 "nvme_io": false 00:17:19.132 }, 00:17:19.132 "memory_domains": [ 00:17:19.132 { 00:17:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.132 "dma_device_type": 2 00:17:19.132 } 00:17:19.132 ], 00:17:19.132 "driver_specific": {} 00:17:19.132 } 00:17:19.132 ] 00:17:19.132 12:37:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:19.132 12:37:01 -- common/autotest_common.sh@895 -- # return 0 00:17:19.132 12:37:01 -- bdev/blockdev.sh@548 -- # sleep 2 00:17:19.132 12:37:01 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:19.132 Running I/O for 5 seconds... 00:17:21.037 12:37:03 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:17:21.037 12:37:03 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:17:21.037 12:37:03 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:17:21.037 12:37:03 -- bdev/blockdev.sh@519 -- # local iostats 00:17:21.037 12:37:03 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:17:21.037 12:37:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.037 12:37:03 -- common/autotest_common.sh@10 -- # set +x 00:17:21.037 12:37:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.037 12:37:03 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:17:21.037 12:37:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.037 12:37:03 -- common/autotest_common.sh@10 -- # set +x 00:17:21.037 12:37:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.037 12:37:03 -- bdev/blockdev.sh@523 -- # iostats='{ 00:17:21.037 "tick_rate": 2490000000, 00:17:21.037 "ticks": 2063360809338, 00:17:21.037 "bdevs": [ 00:17:21.037 { 00:17:21.037 "name": "Malloc_QD", 00:17:21.037 "bytes_read": 981504512, 00:17:21.037 "num_read_ops": 239619, 00:17:21.037 "bytes_written": 0, 00:17:21.037 "num_write_ops": 0, 00:17:21.037 "bytes_unmapped": 0, 00:17:21.037 "num_unmap_ops": 0, 00:17:21.037 "bytes_copied": 0, 00:17:21.037 "num_copy_ops": 0, 00:17:21.037 "read_latency_ticks": 2466254353644, 00:17:21.037 "max_read_latency_ticks": 15420542, 00:17:21.037 "min_read_latency_ticks": 324452, 00:17:21.037 "write_latency_ticks": 0, 00:17:21.037 "max_write_latency_ticks": 0, 00:17:21.037 "min_write_latency_ticks": 0, 00:17:21.037 "unmap_latency_ticks": 0, 00:17:21.037 "max_unmap_latency_ticks": 0, 00:17:21.037 "min_unmap_latency_ticks": 0, 00:17:21.037 "copy_latency_ticks": 0, 00:17:21.037 "max_copy_latency_ticks": 0, 00:17:21.037 "min_copy_latency_ticks": 0, 00:17:21.037 "io_error": {}, 00:17:21.037 "queue_depth_polling_period": 10, 00:17:21.037 "queue_depth": 512, 00:17:21.037 "io_time": 30, 00:17:21.037 "weighted_io_time": 15360 00:17:21.037 } 00:17:21.037 ] 00:17:21.037 }' 00:17:21.037 12:37:03 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
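The assertion being staged here is deliberately small: queue-depth sampling is off by default, so the suite enables it on Malloc_QD, lets the two 256-deep randread jobs run, and then checks via bdev_get_iostat that the configured period is reported back and the depth counters are populated. Minus the xtrace framing, the sequence is (a sketch):

scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
# after some I/O the sampled fields appear in the iostat payload
scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'   # expected: 10

The sampled numbers above are also self-consistent: "queue_depth": 512 matches the aggregate of two 256-deep jobs on the 0x3 core mask, and for this constant-depth run weighted_io_time works out to queue depth times io_time (512 × 30 = 15360).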
00:17:21.037 12:37:03 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:17:21.037 12:37:03 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:17:21.037 12:37:03 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:17:21.037 12:37:03 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:17:21.037 12:37:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.037 12:37:03 -- common/autotest_common.sh@10 -- # set +x 00:17:21.037 00:17:21.037 Latency(us) 00:17:21.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.037 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:17:21.037 Malloc_QD : 2.00 61614.70 240.68 0.00 0.00 4145.73 1039.63 6211.44 00:17:21.037 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:17:21.037 Malloc_QD : 2.00 61975.45 242.09 0.00 0.00 4122.06 631.67 4395.39 00:17:21.037 =================================================================================================================== 00:17:21.037 Total : 123590.15 482.77 0.00 0.00 4133.86 631.67 6211.44 00:17:21.296 0 00:17:21.296 12:37:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.296 12:37:03 -- bdev/blockdev.sh@552 -- # killprocess 112120 00:17:21.296 12:37:03 -- common/autotest_common.sh@926 -- # '[' -z 112120 ']' 00:17:21.296 12:37:03 -- common/autotest_common.sh@930 -- # kill -0 112120 00:17:21.296 12:37:03 -- common/autotest_common.sh@931 -- # uname 00:17:21.296 12:37:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.296 12:37:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112120 00:17:21.296 12:37:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:21.296 12:37:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:21.296 killing process with pid 112120 00:17:21.296 12:37:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112120' 00:17:21.296 12:37:03 -- common/autotest_common.sh@945 -- # kill 112120 00:17:21.296 Received shutdown signal, test time was about 2.166594 seconds 00:17:21.296 00:17:21.296 Latency(us) 00:17:21.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.296 =================================================================================================================== 00:17:21.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.296 12:37:03 -- common/autotest_common.sh@950 -- # wait 112120 00:17:22.672 12:37:05 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:17:22.672 00:17:22.672 real 0m4.833s 00:17:22.672 user 0m8.824s 00:17:22.672 sys 0m0.375s 00:17:22.672 ************************************ 00:17:22.672 END TEST bdev_qd_sampling 00:17:22.672 ************************************ 00:17:22.672 12:37:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.672 12:37:05 -- common/autotest_common.sh@10 -- # set +x 00:17:22.932 12:37:05 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:17:22.932 12:37:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:22.932 12:37:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:22.932 12:37:05 -- common/autotest_common.sh@10 -- # set +x 00:17:22.932 ************************************ 00:17:22.932 START TEST bdev_error 00:17:22.932 ************************************ 00:17:22.932 12:37:05 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:17:22.932 12:37:05 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:17:22.932 
12:37:05 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:17:22.932 12:37:05 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:17:22.932 12:37:05 -- bdev/blockdev.sh@470 -- # ERR_PID=112207 00:17:22.932 12:37:05 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 112207' 00:17:22.932 Process error testing pid: 112207 00:17:22.932 12:37:05 -- bdev/blockdev.sh@472 -- # waitforlisten 112207 00:17:22.932 12:37:05 -- common/autotest_common.sh@819 -- # '[' -z 112207 ']' 00:17:22.932 12:37:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.932 12:37:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.932 12:37:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.932 12:37:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.932 12:37:05 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:17:22.932 12:37:05 -- common/autotest_common.sh@10 -- # set +x 00:17:22.932 [2024-10-01 12:37:05.340307] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:22.932 [2024-10-01 12:37:05.340584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112207 ] 00:17:23.190 [2024-10-01 12:37:05.506648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.190 [2024-10-01 12:37:05.692362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.758 12:37:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:23.758 12:37:06 -- common/autotest_common.sh@852 -- # return 0 00:17:23.759 12:37:06 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:17:23.759 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.759 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 Dev_1 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:17:24.018 12:37:06 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:17:24.018 12:37:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:24.018 12:37:06 -- common/autotest_common.sh@889 -- # local i 00:17:24.018 12:37:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:24.018 12:37:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:24.018 12:37:06 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:17:24.018 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.018 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:17:24.018 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.018 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 [ 00:17:24.018 { 00:17:24.018 "name": "Dev_1", 00:17:24.018 "aliases": [ 00:17:24.018 "e85beab7-9bce-48cb-bf75-bfc67f863b62" 00:17:24.018 ], 00:17:24.018 "product_name": "Malloc disk", 00:17:24.018 "block_size": 512, 00:17:24.018 "num_blocks": 262144, 
00:17:24.018 "uuid": "e85beab7-9bce-48cb-bf75-bfc67f863b62", 00:17:24.018 "assigned_rate_limits": { 00:17:24.018 "rw_ios_per_sec": 0, 00:17:24.018 "rw_mbytes_per_sec": 0, 00:17:24.018 "r_mbytes_per_sec": 0, 00:17:24.018 "w_mbytes_per_sec": 0 00:17:24.018 }, 00:17:24.018 "claimed": false, 00:17:24.018 "zoned": false, 00:17:24.018 "supported_io_types": { 00:17:24.018 "read": true, 00:17:24.018 "write": true, 00:17:24.018 "unmap": true, 00:17:24.018 "write_zeroes": true, 00:17:24.018 "flush": true, 00:17:24.018 "reset": true, 00:17:24.018 "compare": false, 00:17:24.018 "compare_and_write": false, 00:17:24.018 "abort": true, 00:17:24.018 "nvme_admin": false, 00:17:24.018 "nvme_io": false 00:17:24.018 }, 00:17:24.018 "memory_domains": [ 00:17:24.018 { 00:17:24.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.018 "dma_device_type": 2 00:17:24.018 } 00:17:24.018 ], 00:17:24.018 "driver_specific": {} 00:17:24.018 } 00:17:24.018 ] 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- common/autotest_common.sh@895 -- # return 0 00:17:24.018 12:37:06 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:17:24.018 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.018 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 true 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:17:24.018 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.018 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 Dev_2 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:17:24.018 12:37:06 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:17:24.018 12:37:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:24.018 12:37:06 -- common/autotest_common.sh@889 -- # local i 00:17:24.018 12:37:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:24.018 12:37:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:24.018 12:37:06 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:17:24.018 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.018 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:17:24.018 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.018 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 [ 00:17:24.018 { 00:17:24.018 "name": "Dev_2", 00:17:24.018 "aliases": [ 00:17:24.018 "4de44897-1f98-4117-8d68-d83d798067ce" 00:17:24.018 ], 00:17:24.018 "product_name": "Malloc disk", 00:17:24.018 "block_size": 512, 00:17:24.018 "num_blocks": 262144, 00:17:24.018 "uuid": "4de44897-1f98-4117-8d68-d83d798067ce", 00:17:24.018 "assigned_rate_limits": { 00:17:24.018 "rw_ios_per_sec": 0, 00:17:24.018 "rw_mbytes_per_sec": 0, 00:17:24.018 "r_mbytes_per_sec": 0, 00:17:24.018 "w_mbytes_per_sec": 0 00:17:24.018 }, 00:17:24.018 "claimed": false, 00:17:24.018 "zoned": false, 00:17:24.018 "supported_io_types": { 00:17:24.018 "read": true, 00:17:24.018 "write": true, 00:17:24.018 "unmap": true, 00:17:24.018 "write_zeroes": true, 00:17:24.018 "flush": true, 00:17:24.018 "reset": true, 00:17:24.018 "compare": false, 
00:17:24.018 "compare_and_write": false, 00:17:24.018 "abort": true, 00:17:24.018 "nvme_admin": false, 00:17:24.018 "nvme_io": false 00:17:24.018 }, 00:17:24.018 "memory_domains": [ 00:17:24.018 { 00:17:24.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.018 "dma_device_type": 2 00:17:24.018 } 00:17:24.018 ], 00:17:24.018 "driver_specific": {} 00:17:24.018 } 00:17:24.018 ] 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- common/autotest_common.sh@895 -- # return 0 00:17:24.018 12:37:06 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:17:24.018 12:37:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:24.018 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:24.018 12:37:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:24.018 12:37:06 -- bdev/blockdev.sh@482 -- # sleep 1 00:17:24.018 12:37:06 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:17:24.277 Running I/O for 5 seconds... 00:17:25.214 12:37:07 -- bdev/blockdev.sh@485 -- # kill -0 112207 00:17:25.214 Process is existed as continue on error is set. Pid: 112207 00:17:25.214 12:37:07 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 112207' 00:17:25.214 12:37:07 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:17:25.214 12:37:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:25.214 12:37:07 -- common/autotest_common.sh@10 -- # set +x 00:17:25.214 12:37:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:25.214 12:37:07 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:17:25.214 12:37:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:25.214 12:37:07 -- common/autotest_common.sh@10 -- # set +x 00:17:25.214 Timeout while waiting for response: 00:17:25.214 00:17:25.214 00:17:25.472 12:37:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:25.472 12:37:07 -- bdev/blockdev.sh@495 -- # sleep 5 00:17:29.687 00:17:29.687 Latency(us) 00:17:29.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.687 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:17:29.687 EE_Dev_1 : 0.93 55243.80 215.80 5.36 0.00 287.51 100.34 516.52 00:17:29.687 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:17:29.687 Dev_2 : 5.00 108546.60 424.01 0.00 0.00 145.29 45.44 358789.86 00:17:29.687 =================================================================================================================== 00:17:29.687 Total : 163790.40 639.81 5.36 0.00 157.63 45.44 358789.86 00:17:30.624 12:37:12 -- bdev/blockdev.sh@497 -- # killprocess 112207 00:17:30.624 12:37:12 -- common/autotest_common.sh@926 -- # '[' -z 112207 ']' 00:17:30.624 12:37:12 -- common/autotest_common.sh@930 -- # kill -0 112207 00:17:30.624 12:37:12 -- common/autotest_common.sh@931 -- # uname 00:17:30.624 12:37:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:30.624 12:37:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112207 00:17:30.624 12:37:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:30.624 12:37:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:30.624 12:37:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112207' 00:17:30.624 killing process with pid 112207 00:17:30.624 Received shutdown signal, test time was about 
5.000000 seconds 00:17:30.624 00:17:30.624 Latency(us) 00:17:30.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.624 =================================================================================================================== 00:17:30.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.624 12:37:12 -- common/autotest_common.sh@945 -- # kill 112207 00:17:30.624 12:37:12 -- common/autotest_common.sh@950 -- # wait 112207 00:17:32.530 12:37:14 -- bdev/blockdev.sh@501 -- # ERR_PID=112330 00:17:32.530 12:37:14 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:17:32.530 Process error testing pid: 112330 00:17:32.530 12:37:14 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 112330' 00:17:32.530 12:37:14 -- bdev/blockdev.sh@503 -- # waitforlisten 112330 00:17:32.530 12:37:14 -- common/autotest_common.sh@819 -- # '[' -z 112330 ']' 00:17:32.530 12:37:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.530 12:37:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:32.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.530 12:37:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.530 12:37:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:32.530 12:37:14 -- common/autotest_common.sh@10 -- # set +x 00:17:32.530 [2024-10-01 12:37:14.619874] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:32.530 [2024-10-01 12:37:14.620056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112330 ] 00:17:32.530 [2024-10-01 12:37:14.785889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.530 [2024-10-01 12:37:14.977256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.097 12:37:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:33.097 12:37:15 -- common/autotest_common.sh@852 -- # return 0 00:17:33.097 12:37:15 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:17:33.097 12:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:33.097 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.097 Dev_1 00:17:33.097 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.097 12:37:15 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:17:33.097 12:37:15 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:17:33.097 12:37:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:33.097 12:37:15 -- common/autotest_common.sh@889 -- # local i 00:17:33.097 12:37:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:33.097 12:37:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:33.098 12:37:15 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:17:33.098 12:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:33.098 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.098 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.098 12:37:15 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:17:33.098 12:37:15 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:17:33.098 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.357 [ 00:17:33.357 { 00:17:33.357 "name": "Dev_1", 00:17:33.357 "aliases": [ 00:17:33.357 "a7b4612f-14cf-4680-871e-99f509cd64fc" 00:17:33.357 ], 00:17:33.357 "product_name": "Malloc disk", 00:17:33.357 "block_size": 512, 00:17:33.357 "num_blocks": 262144, 00:17:33.357 "uuid": "a7b4612f-14cf-4680-871e-99f509cd64fc", 00:17:33.357 "assigned_rate_limits": { 00:17:33.357 "rw_ios_per_sec": 0, 00:17:33.357 "rw_mbytes_per_sec": 0, 00:17:33.357 "r_mbytes_per_sec": 0, 00:17:33.357 "w_mbytes_per_sec": 0 00:17:33.357 }, 00:17:33.357 "claimed": false, 00:17:33.357 "zoned": false, 00:17:33.357 "supported_io_types": { 00:17:33.357 "read": true, 00:17:33.357 "write": true, 00:17:33.357 "unmap": true, 00:17:33.357 "write_zeroes": true, 00:17:33.357 "flush": true, 00:17:33.357 "reset": true, 00:17:33.357 "compare": false, 00:17:33.357 "compare_and_write": false, 00:17:33.357 "abort": true, 00:17:33.357 "nvme_admin": false, 00:17:33.357 "nvme_io": false 00:17:33.357 }, 00:17:33.357 "memory_domains": [ 00:17:33.357 { 00:17:33.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.357 "dma_device_type": 2 00:17:33.357 } 00:17:33.357 ], 00:17:33.357 "driver_specific": {} 00:17:33.357 } 00:17:33.357 ] 00:17:33.357 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.357 12:37:15 -- common/autotest_common.sh@895 -- # return 0 00:17:33.357 12:37:15 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:17:33.357 12:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:33.357 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.357 true 00:17:33.357 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.357 12:37:15 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:17:33.357 12:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:33.357 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.357 Dev_2 00:17:33.357 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.357 12:37:15 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:17:33.357 12:37:15 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:17:33.357 12:37:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:33.357 12:37:15 -- common/autotest_common.sh@889 -- # local i 00:17:33.357 12:37:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:33.357 12:37:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:33.357 12:37:15 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:17:33.357 12:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:33.357 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.357 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.357 12:37:15 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:17:33.357 12:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:33.357 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.357 [ 00:17:33.357 { 00:17:33.357 "name": "Dev_2", 00:17:33.357 "aliases": [ 00:17:33.357 "e1b8f07d-7d6e-4aac-8cd7-12f2aee0056e" 00:17:33.357 ], 00:17:33.357 "product_name": "Malloc disk", 00:17:33.357 "block_size": 512, 00:17:33.357 "num_blocks": 262144, 00:17:33.357 "uuid": "e1b8f07d-7d6e-4aac-8cd7-12f2aee0056e", 00:17:33.357 "assigned_rate_limits": { 00:17:33.357 "rw_ios_per_sec": 0, 00:17:33.357 "rw_mbytes_per_sec": 0, 00:17:33.357 "r_mbytes_per_sec": 0, 00:17:33.357 
"w_mbytes_per_sec": 0 00:17:33.357 }, 00:17:33.357 "claimed": false, 00:17:33.357 "zoned": false, 00:17:33.357 "supported_io_types": { 00:17:33.357 "read": true, 00:17:33.357 "write": true, 00:17:33.357 "unmap": true, 00:17:33.357 "write_zeroes": true, 00:17:33.357 "flush": true, 00:17:33.357 "reset": true, 00:17:33.357 "compare": false, 00:17:33.357 "compare_and_write": false, 00:17:33.357 "abort": true, 00:17:33.357 "nvme_admin": false, 00:17:33.357 "nvme_io": false 00:17:33.357 }, 00:17:33.357 "memory_domains": [ 00:17:33.357 { 00:17:33.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.357 "dma_device_type": 2 00:17:33.357 } 00:17:33.357 ], 00:17:33.357 "driver_specific": {} 00:17:33.357 } 00:17:33.357 ] 00:17:33.357 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.357 12:37:15 -- common/autotest_common.sh@895 -- # return 0 00:17:33.357 12:37:15 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:17:33.357 12:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:33.357 12:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.357 12:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:33.357 12:37:15 -- bdev/blockdev.sh@513 -- # NOT wait 112330 00:17:33.357 12:37:15 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:17:33.357 12:37:15 -- common/autotest_common.sh@640 -- # local es=0 00:17:33.357 12:37:15 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 112330 00:17:33.357 12:37:15 -- common/autotest_common.sh@628 -- # local arg=wait 00:17:33.357 12:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:33.357 12:37:15 -- common/autotest_common.sh@632 -- # type -t wait 00:17:33.357 12:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:33.357 12:37:15 -- common/autotest_common.sh@643 -- # wait 112330 00:17:33.617 Running I/O for 5 seconds... 
00:17:33.617 task offset: 103552 on job bdev=EE_Dev_1 fails 00:17:33.617 00:17:33.617 Latency(us) 00:17:33.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.617 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:17:33.617 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:17:33.617 EE_Dev_1 : 0.00 38732.39 151.30 8802.82 0.00 272.99 107.75 493.49 00:17:33.617 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:17:33.617 Dev_2 : 0.00 25000.00 97.66 0.00 0.00 462.60 98.70 855.39 00:17:33.617 =================================================================================================================== 00:17:33.617 Total : 63732.39 248.95 8802.82 0.00 375.83 98.70 855.39 00:17:33.617 [2024-10-01 12:37:15.935858] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:33.617 request: 00:17:33.617 { 00:17:33.617 "method": "perform_tests", 00:17:33.617 "req_id": 1 00:17:33.617 } 00:17:33.617 Got JSON-RPC error response 00:17:33.617 response: 00:17:33.617 { 00:17:33.617 "code": -32603, 00:17:33.617 "message": "bdevperf failed with error Operation not permitted" 00:17:33.617 } 00:17:35.526 12:37:17 -- common/autotest_common.sh@643 -- # es=255 00:17:35.526 12:37:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:35.526 12:37:17 -- common/autotest_common.sh@652 -- # es=127 00:17:35.526 12:37:17 -- common/autotest_common.sh@653 -- # case "$es" in 00:17:35.526 12:37:17 -- common/autotest_common.sh@660 -- # es=1 00:17:35.526 12:37:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:35.526 00:17:35.526 real 0m12.644s 00:17:35.526 user 0m12.619s 00:17:35.526 sys 0m0.800s 00:17:35.526 12:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.526 12:37:17 -- common/autotest_common.sh@10 -- # set +x 00:17:35.526 ************************************ 00:17:35.526 END TEST bdev_error 00:17:35.526 ************************************ 00:17:35.526 12:37:17 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:17:35.526 12:37:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:35.526 12:37:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:35.526 12:37:17 -- common/autotest_common.sh@10 -- # set +x 00:17:35.526 ************************************ 00:17:35.526 START TEST bdev_stat 00:17:35.526 ************************************ 00:17:35.526 12:37:18 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:17:35.526 12:37:18 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:17:35.526 12:37:18 -- bdev/blockdev.sh@594 -- # STAT_PID=112401 00:17:35.526 Process Bdev IO statistics testing pid: 112401 00:17:35.526 12:37:18 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 112401' 00:17:35.526 12:37:18 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:17:35.526 12:37:18 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:17:35.526 12:37:18 -- bdev/blockdev.sh@597 -- # waitforlisten 112401 00:17:35.526 12:37:18 -- common/autotest_common.sh@819 -- # '[' -z 112401 ']' 00:17:35.526 12:37:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.526 12:37:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:35.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
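The closing stat suite mirrors the QD sampling setup — core mask 0x3, 256-deep randread against a single malloc bdev — but its subject is bdev_get_iostat itself: a device-level snapshot is taken, the same counters are fetched per channel with -c (one channel per reactor thread, as the "thread_id" entries below show), and the summed per-channel figures must stay consistent with device-level snapshots taken around them; since I/O keeps flowing, the comparison is a bounds check rather than strict equality. The tick fields convert to wall time via the reported tick_rate (2.49 GHz here). A sketch of the two reads:

# device-level counters
scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops'
# per-channel breakdown; their sum is compared against the device-level count
scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq '[.channels[].num_read_ops] | add'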
00:17:35.526 12:37:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.526 12:37:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:35.526 12:37:18 -- common/autotest_common.sh@10 -- # set +x 00:17:35.786 [2024-10-01 12:37:18.073852] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:35.786 [2024-10-01 12:37:18.074512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112401 ] 00:17:35.786 [2024-10-01 12:37:18.244236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:36.046 [2024-10-01 12:37:18.435447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.046 [2024-10-01 12:37:18.435448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.619 12:37:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:36.619 12:37:18 -- common/autotest_common.sh@852 -- # return 0 00:17:36.619 12:37:18 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:17:36.619 12:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.619 12:37:18 -- common/autotest_common.sh@10 -- # set +x 00:17:36.619 Malloc_STAT 00:17:36.619 12:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.619 12:37:19 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:17:36.619 12:37:19 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:17:36.619 12:37:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:36.619 12:37:19 -- common/autotest_common.sh@889 -- # local i 00:17:36.619 12:37:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:36.619 12:37:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:36.619 12:37:19 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:17:36.619 12:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.620 12:37:19 -- common/autotest_common.sh@10 -- # set +x 00:17:36.620 12:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.620 12:37:19 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:17:36.620 12:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.620 12:37:19 -- common/autotest_common.sh@10 -- # set +x 00:17:36.620 [ 00:17:36.620 { 00:17:36.620 "name": "Malloc_STAT", 00:17:36.620 "aliases": [ 00:17:36.620 "c060907d-ab85-4f80-94a3-956c28993c91" 00:17:36.620 ], 00:17:36.620 "product_name": "Malloc disk", 00:17:36.620 "block_size": 512, 00:17:36.620 "num_blocks": 262144, 00:17:36.620 "uuid": "c060907d-ab85-4f80-94a3-956c28993c91", 00:17:36.620 "assigned_rate_limits": { 00:17:36.620 "rw_ios_per_sec": 0, 00:17:36.620 "rw_mbytes_per_sec": 0, 00:17:36.620 "r_mbytes_per_sec": 0, 00:17:36.620 "w_mbytes_per_sec": 0 00:17:36.620 }, 00:17:36.620 "claimed": false, 00:17:36.620 "zoned": false, 00:17:36.620 "supported_io_types": { 00:17:36.620 "read": true, 00:17:36.620 "write": true, 00:17:36.620 "unmap": true, 00:17:36.620 "write_zeroes": true, 00:17:36.620 "flush": true, 00:17:36.620 "reset": true, 00:17:36.620 "compare": false, 00:17:36.620 "compare_and_write": false, 00:17:36.620 "abort": true, 00:17:36.620 "nvme_admin": false, 00:17:36.620 "nvme_io": false 00:17:36.620 }, 00:17:36.620 "memory_domains": [ 00:17:36.620 { 
00:17:36.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.620 "dma_device_type": 2 00:17:36.620 } 00:17:36.620 ], 00:17:36.620 "driver_specific": {} 00:17:36.620 } 00:17:36.620 ] 00:17:36.620 12:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.620 12:37:19 -- common/autotest_common.sh@895 -- # return 0 00:17:36.620 12:37:19 -- bdev/blockdev.sh@603 -- # sleep 2 00:17:36.620 12:37:19 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:36.880 Running I/O for 10 seconds... 00:17:38.789 12:37:21 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:17:38.789 12:37:21 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:17:38.789 12:37:21 -- bdev/blockdev.sh@558 -- # local iostats 00:17:38.789 12:37:21 -- bdev/blockdev.sh@559 -- # local io_count1 00:17:38.789 12:37:21 -- bdev/blockdev.sh@560 -- # local io_count2 00:17:38.789 12:37:21 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:17:38.789 12:37:21 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:17:38.789 12:37:21 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:17:38.789 12:37:21 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:17:38.789 12:37:21 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:17:38.789 12:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:38.789 12:37:21 -- common/autotest_common.sh@10 -- # set +x 00:17:38.789 12:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:38.789 12:37:21 -- bdev/blockdev.sh@566 -- # iostats='{ 00:17:38.789 "tick_rate": 2490000000, 00:17:38.789 "ticks": 2107327552724, 00:17:38.789 "bdevs": [ 00:17:38.789 { 00:17:38.789 "name": "Malloc_STAT", 00:17:38.789 "bytes_read": 965775872, 00:17:38.789 "num_read_ops": 235779, 00:17:38.789 "bytes_written": 0, 00:17:38.789 "num_write_ops": 0, 00:17:38.789 "bytes_unmapped": 0, 00:17:38.789 "num_unmap_ops": 0, 00:17:38.789 "bytes_copied": 0, 00:17:38.789 "num_copy_ops": 0, 00:17:38.789 "read_latency_ticks": 2454286290950, 00:17:38.789 "max_read_latency_ticks": 15478310, 00:17:38.790 "min_read_latency_ticks": 289874, 00:17:38.790 "write_latency_ticks": 0, 00:17:38.790 "max_write_latency_ticks": 0, 00:17:38.790 "min_write_latency_ticks": 0, 00:17:38.790 "unmap_latency_ticks": 0, 00:17:38.790 "max_unmap_latency_ticks": 0, 00:17:38.790 "min_unmap_latency_ticks": 0, 00:17:38.790 "copy_latency_ticks": 0, 00:17:38.790 "max_copy_latency_ticks": 0, 00:17:38.790 "min_copy_latency_ticks": 0, 00:17:38.790 "io_error": {} 00:17:38.790 } 00:17:38.790 ] 00:17:38.790 }' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@567 -- # io_count1=235779 00:17:38.790 12:37:21 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:17:38.790 12:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:38.790 12:37:21 -- common/autotest_common.sh@10 -- # set +x 00:17:38.790 12:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:38.790 12:37:21 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:17:38.790 "tick_rate": 2490000000, 00:17:38.790 "ticks": 2107471919888, 00:17:38.790 "name": "Malloc_STAT", 00:17:38.790 "channels": [ 00:17:38.790 { 00:17:38.790 "thread_id": 2, 00:17:38.790 "bytes_read": 495976448, 00:17:38.790 "num_read_ops": 121088, 00:17:38.790 "bytes_written": 0, 00:17:38.790 "num_write_ops": 0, 00:17:38.790 "bytes_unmapped": 0, 00:17:38.790 "num_unmap_ops": 0, 00:17:38.790 
"bytes_copied": 0, 00:17:38.790 "num_copy_ops": 0, 00:17:38.790 "read_latency_ticks": 1262793093216, 00:17:38.790 "max_read_latency_ticks": 15478310, 00:17:38.790 "min_read_latency_ticks": 7801298, 00:17:38.790 "write_latency_ticks": 0, 00:17:38.790 "max_write_latency_ticks": 0, 00:17:38.790 "min_write_latency_ticks": 0, 00:17:38.790 "unmap_latency_ticks": 0, 00:17:38.790 "max_unmap_latency_ticks": 0, 00:17:38.790 "min_unmap_latency_ticks": 0, 00:17:38.790 "copy_latency_ticks": 0, 00:17:38.790 "max_copy_latency_ticks": 0, 00:17:38.790 "min_copy_latency_ticks": 0 00:17:38.790 }, 00:17:38.790 { 00:17:38.790 "thread_id": 3, 00:17:38.790 "bytes_read": 498073600, 00:17:38.790 "num_read_ops": 121600, 00:17:38.790 "bytes_written": 0, 00:17:38.790 "num_write_ops": 0, 00:17:38.790 "bytes_unmapped": 0, 00:17:38.790 "num_unmap_ops": 0, 00:17:38.790 "bytes_copied": 0, 00:17:38.790 "num_copy_ops": 0, 00:17:38.790 "read_latency_ticks": 1264477551644, 00:17:38.790 "max_read_latency_ticks": 11249880, 00:17:38.790 "min_read_latency_ticks": 7542918, 00:17:38.790 "write_latency_ticks": 0, 00:17:38.790 "max_write_latency_ticks": 0, 00:17:38.790 "min_write_latency_ticks": 0, 00:17:38.790 "unmap_latency_ticks": 0, 00:17:38.790 "max_unmap_latency_ticks": 0, 00:17:38.790 "min_unmap_latency_ticks": 0, 00:17:38.790 "copy_latency_ticks": 0, 00:17:38.790 "max_copy_latency_ticks": 0, 00:17:38.790 "min_copy_latency_ticks": 0 00:17:38.790 } 00:17:38.790 ] 00:17:38.790 }' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=121088 00:17:38.790 12:37:21 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=121088 00:17:38.790 12:37:21 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=121600 00:17:38.790 12:37:21 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=242688 00:17:38.790 12:37:21 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:17:38.790 12:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:38.790 12:37:21 -- common/autotest_common.sh@10 -- # set +x 00:17:38.790 12:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:38.790 12:37:21 -- bdev/blockdev.sh@575 -- # iostats='{ 00:17:38.790 "tick_rate": 2490000000, 00:17:38.790 "ticks": 2107747260780, 00:17:38.790 "bdevs": [ 00:17:38.790 { 00:17:38.790 "name": "Malloc_STAT", 00:17:38.790 "bytes_read": 1048613376, 00:17:38.790 "num_read_ops": 256003, 00:17:38.790 "bytes_written": 0, 00:17:38.790 "num_write_ops": 0, 00:17:38.790 "bytes_unmapped": 0, 00:17:38.790 "num_unmap_ops": 0, 00:17:38.790 "bytes_copied": 0, 00:17:38.790 "num_copy_ops": 0, 00:17:38.790 "read_latency_ticks": 2669246681970, 00:17:38.790 "max_read_latency_ticks": 15478310, 00:17:38.790 "min_read_latency_ticks": 289874, 00:17:38.790 "write_latency_ticks": 0, 00:17:38.790 "max_write_latency_ticks": 0, 00:17:38.790 "min_write_latency_ticks": 0, 00:17:38.790 "unmap_latency_ticks": 0, 00:17:38.790 "max_unmap_latency_ticks": 0, 00:17:38.790 "min_unmap_latency_ticks": 0, 00:17:38.790 "copy_latency_ticks": 0, 00:17:38.790 "max_copy_latency_ticks": 0, 00:17:38.790 "min_copy_latency_ticks": 0, 00:17:38.790 "io_error": {} 00:17:38.790 } 00:17:38.790 ] 00:17:38.790 }' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@576 -- # io_count2=256003 00:17:38.790 12:37:21 -- bdev/blockdev.sh@581 -- # '[' 242688 
-lt 235779 ']' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@581 -- # '[' 242688 -gt 256003 ']' 00:17:38.790 12:37:21 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:17:38.790 12:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:38.790 12:37:21 -- common/autotest_common.sh@10 -- # set +x 00:17:38.790 00:17:38.790 Latency(us) 00:17:38.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.790 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:17:38.790 Malloc_STAT : 2.16 60750.22 237.31 0.00 0.00 4204.95 1026.47 6237.76 00:17:38.790 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:17:38.790 Malloc_STAT : 2.16 61303.59 239.47 0.00 0.00 4167.12 733.66 4526.98 00:17:38.790 =================================================================================================================== 00:17:38.790 Total : 122053.81 476.77 0.00 0.00 4185.94 733.66 6237.76 00:17:39.083 0 00:17:39.083 12:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.083 12:37:21 -- bdev/blockdev.sh@607 -- # killprocess 112401 00:17:39.083 12:37:21 -- common/autotest_common.sh@926 -- # '[' -z 112401 ']' 00:17:39.083 12:37:21 -- common/autotest_common.sh@930 -- # kill -0 112401 00:17:39.083 12:37:21 -- common/autotest_common.sh@931 -- # uname 00:17:39.083 12:37:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.083 12:37:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112401 00:17:39.083 12:37:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:39.083 killing process with pid 112401 00:17:39.083 12:37:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:39.083 12:37:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112401' 00:17:39.083 Received shutdown signal, test time was about 2.321461 seconds 00:17:39.083 00:17:39.083 Latency(us) 00:17:39.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.083 =================================================================================================================== 00:17:39.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.083 12:37:21 -- common/autotest_common.sh@945 -- # kill 112401 00:17:39.083 12:37:21 -- common/autotest_common.sh@950 -- # wait 112401 00:17:40.466 12:37:22 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:17:40.466 00:17:40.466 real 0m4.982s 00:17:40.466 user 0m9.276s 00:17:40.466 sys 0m0.392s 00:17:40.466 ************************************ 00:17:40.466 END TEST bdev_stat 00:17:40.466 ************************************ 00:17:40.466 12:37:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.466 12:37:22 -- common/autotest_common.sh@10 -- # set +x 00:17:40.726 12:37:23 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:17:40.726 12:37:23 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:17:40.726 12:37:23 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:17:40.726 12:37:23 -- bdev/blockdev.sh@809 -- # cleanup 00:17:40.726 12:37:23 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:40.726 12:37:23 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:40.726 12:37:23 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:17:40.726 12:37:23 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:17:40.726 12:37:23 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:17:40.726 12:37:23 -- 
bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:17:40.726 00:17:40.726 real 2m24.376s 00:17:40.726 user 5m50.180s 00:17:40.726 sys 0m20.985s 00:17:40.726 12:37:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.726 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:17:40.726 ************************************ 00:17:40.726 END TEST blockdev_general 00:17:40.726 ************************************ 00:17:40.726 12:37:23 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:17:40.726 12:37:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:40.726 12:37:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.726 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:17:40.726 ************************************ 00:17:40.726 START TEST bdev_raid 00:17:40.726 ************************************ 00:17:40.726 12:37:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:17:40.726 * Looking for test storage... 00:17:40.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:40.986 12:37:23 -- bdev/nbd_common.sh@6 -- # set -e 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@716 -- # uname -s 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:17:40.986 12:37:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:40.986 12:37:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.986 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:17:40.986 ************************************ 00:17:40.986 START TEST raid_function_test_raid0 00:17:40.986 ************************************ 00:17:40.986 12:37:23 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@86 -- # raid_pid=112559 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 112559' 00:17:40.986 Process raid pid: 112559 00:17:40.986 12:37:23 -- bdev/bdev_raid.sh@88 -- # waitforlisten 112559 /var/tmp/spdk-raid.sock 00:17:40.986 12:37:23 -- common/autotest_common.sh@819 -- # '[' -z 112559 ']' 00:17:40.986 12:37:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.986 12:37:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
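Unlike the stat suite, the raid tests drive a standalone bdev_svc app over a dedicated RPC socket; the harness setup traced below condenses to:

  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &   # -L bdev_raid enables the raid debug log seen below
  rpc_py='scripts/rpc.py -s /var/tmp/spdk-raid.sock'                          # every raid RPC below goes through this alias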
00:17:40.986 12:37:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.987 12:37:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.987 12:37:23 -- common/autotest_common.sh@10 -- # set +x 00:17:40.987 [2024-10-01 12:37:23.383375] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:40.987 [2024-10-01 12:37:23.383956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.246 [2024-10-01 12:37:23.549878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.246 [2024-10-01 12:37:23.704206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.505 [2024-10-01 12:37:23.855916] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.765 12:37:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.765 12:37:24 -- common/autotest_common.sh@852 -- # return 0 00:17:41.765 12:37:24 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:17:41.765 12:37:24 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:17:41.765 12:37:24 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:17:41.765 12:37:24 -- bdev/bdev_raid.sh@70 -- # cat 00:17:41.765 12:37:24 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:17:42.025 [2024-10-01 12:37:24.462554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:17:42.025 [2024-10-01 12:37:24.464437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:17:42.025 [2024-10-01 12:37:24.464507] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:42.025 [2024-10-01 12:37:24.464516] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:42.025 [2024-10-01 12:37:24.464632] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:42.025 [2024-10-01 12:37:24.464933] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:42.025 [2024-10-01 12:37:24.464952] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:17:42.025 [2024-10-01 12:37:24.465101] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.025 Base_1 00:17:42.025 Base_2 00:17:42.025 12:37:24 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:17:42.025 12:37:24 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:42.025 12:37:24 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:17:42.285 12:37:24 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:17:42.285 12:37:24 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:17:42.285 12:37:24 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:42.285 
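The nbd setup traced here looks up the online raid bdev by name and maps it to /dev/nbd0 so ordinary block tools (dd, blkdiscard, cmp) can exercise it; reduced to the two RPCs involved (the variable capture is a paraphrase of the jq pipeline above):

  raid_bdev=$($rpc_py bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)')   # -> "raid"
  $rpc_py nbd_start_disk "$raid_bdev" /dev/nbd0                                        # expose it as /dev/nbd0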
12:37:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@12 -- # local i 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:42.285 12:37:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:17:42.545 [2024-10-01 12:37:24.834019] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:42.545 /dev/nbd0 00:17:42.545 12:37:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.545 12:37:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:42.545 12:37:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:17:42.545 12:37:24 -- common/autotest_common.sh@857 -- # local i 00:17:42.545 12:37:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:17:42.545 12:37:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:42.545 12:37:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:17:42.545 12:37:24 -- common/autotest_common.sh@861 -- # break 00:17:42.545 12:37:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:42.545 12:37:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:42.545 12:37:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.545 1+0 records in 00:17:42.545 1+0 records out 00:17:42.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296802 s, 13.8 MB/s 00:17:42.545 12:37:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.545 12:37:24 -- common/autotest_common.sh@874 -- # size=4096 00:17:42.545 12:37:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.545 12:37:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:42.545 12:37:24 -- common/autotest_common.sh@877 -- # return 0 00:17:42.545 12:37:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.545 12:37:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:42.545 12:37:24 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:17:42.545 12:37:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:42.545 12:37:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:17:42.545 12:37:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:42.545 { 00:17:42.545 "nbd_device": "/dev/nbd0", 00:17:42.545 "bdev_name": "raid" 00:17:42.545 } 00:17:42.545 ]' 00:17:42.545 12:37:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:42.545 { 00:17:42.545 "nbd_device": "/dev/nbd0", 00:17:42.545 "bdev_name": "raid" 00:17:42.545 } 00:17:42.545 ]' 00:17:42.545 12:37:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:42.804 12:37:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:42.804 12:37:25 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:42.804 12:37:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:42.804 12:37:25 -- bdev/nbd_common.sh@65 -- # count=1 00:17:42.804 12:37:25 -- bdev/nbd_common.sh@66 -- # echo 1 00:17:42.804 12:37:25 -- bdev/bdev_raid.sh@98 -- # count=1 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 
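The raid_unmap_data_verify pass that follows writes a 2 MiB random pattern through the nbd device, then for each (offset, length) pair zeroes the reference file and discards the matching byte range on the device, expecting cmp to still succeed. The middle iteration, condensed from the commands below:

  dd if=/dev/urandom of=/raidrandtest bs=512 count=4096            # 4096 * 512 = 2097152-byte reference pattern
  dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct  # push it through the raid bdev
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidrandtest /dev/nbd0                        # device must match the file
  dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
  blkdiscard -o 526336 -l 1041920 /dev/nbd0                        # same range on the device, in bytes
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidrandtest /dev/nbd0                        # unmapped range must read back as zeroes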
00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@20 -- # local blksize 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:17:42.805 4096+0 records in 00:17:42.805 4096+0 records out 00:17:42.805 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0337496 s, 62.1 MB/s 00:17:42.805 12:37:25 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:17:43.064 4096+0 records in 00:17:43.064 4096+0 records out 00:17:43.064 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.197355 s, 10.6 MB/s 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:17:43.064 128+0 records in 00:17:43.064 128+0 records out 00:17:43.064 65536 bytes (66 kB, 64 KiB) copied, 0.000865916 s, 75.7 MB/s 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:17:43.064 2035+0 records in 00:17:43.064 2035+0 records out 00:17:43.064 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0108721 s, 95.8 MB/s 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:17:43.064 12:37:25 -- 
bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:17:43.064 12:37:25 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:17:43.064 456+0 records in 00:17:43.064 456+0 records out 00:17:43.064 233472 bytes (233 kB, 228 KiB) copied, 0.00331673 s, 70.4 MB/s 00:17:43.065 12:37:25 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:17:43.065 12:37:25 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:17:43.065 12:37:25 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:43.065 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:17:43.065 12:37:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:43.065 12:37:25 -- bdev/bdev_raid.sh@53 -- # return 0 00:17:43.065 12:37:25 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:17:43.065 12:37:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:43.065 12:37:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:43.065 12:37:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.065 12:37:25 -- bdev/nbd_common.sh@51 -- # local i 00:17:43.065 12:37:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.065 12:37:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.324 [2024-10-01 12:37:25.720322] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@41 -- # break 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.324 12:37:25 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:43.324 12:37:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@65 -- # true 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@65 -- # count=0 00:17:43.585 12:37:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:43.585 12:37:25 -- bdev/bdev_raid.sh@106 -- # count=0 00:17:43.585 12:37:25 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:17:43.585 12:37:25 -- bdev/bdev_raid.sh@111 -- # killprocess 112559 00:17:43.585 12:37:25 -- common/autotest_common.sh@926 -- # '[' -z 112559 ']' 00:17:43.585 12:37:25 -- common/autotest_common.sh@930 -- # kill -0 112559 00:17:43.585 12:37:25 -- common/autotest_common.sh@931 -- # uname 00:17:43.585 12:37:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:43.585 12:37:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112559 00:17:43.585 12:37:25 
-- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:43.585 12:37:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:43.585 killing process with pid 112559 00:17:43.585 12:37:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112559' 00:17:43.585 12:37:25 -- common/autotest_common.sh@945 -- # kill 112559 00:17:43.585 [2024-10-01 12:37:25.981496] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.585 [2024-10-01 12:37:25.981576] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.585 [2024-10-01 12:37:25.981619] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.585 [2024-10-01 12:37:25.981627] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:17:43.585 12:37:25 -- common/autotest_common.sh@950 -- # wait 112559 00:17:43.845 [2024-10-01 12:37:26.142098] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@113 -- # return 0 00:17:44.786 00:17:44.786 real 0m3.888s 00:17:44.786 user 0m4.576s 00:17:44.786 sys 0m1.039s 00:17:44.786 12:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.786 12:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:44.786 ************************************ 00:17:44.786 END TEST raid_function_test_raid0 00:17:44.786 ************************************ 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:17:44.786 12:37:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:44.786 12:37:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:44.786 12:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:44.786 ************************************ 00:17:44.786 START TEST raid_function_test_concat 00:17:44.786 ************************************ 00:17:44.786 12:37:27 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@86 -- # raid_pid=112715 00:17:44.786 Process raid pid: 112715 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 112715' 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:44.786 12:37:27 -- bdev/bdev_raid.sh@88 -- # waitforlisten 112715 /var/tmp/spdk-raid.sock 00:17:44.786 12:37:27 -- common/autotest_common.sh@819 -- # '[' -z 112715 ']' 00:17:44.786 12:37:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:44.786 12:37:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:44.786 12:37:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:44.786 12:37:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.786 12:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:45.046 [2024-10-01 12:37:27.358243] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
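Both raid_function_test runs (raid0 above, concat below) use the same unmap geometry; the byte offsets and lengths in the log are just the block values scaled by the 512-byte block size:

  echo $((1028 * 512)) $((2035 * 512))   # 526336 1041920  (second unmap)
  echo $((321 * 512))  $((456 * 512))    # 164352 233472   (third unmap)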
00:17:45.046 [2024-10-01 12:37:27.358380] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.046 [2024-10-01 12:37:27.523823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.305 [2024-10-01 12:37:27.671626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.305 [2024-10-01 12:37:27.818427] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.873 12:37:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:45.873 12:37:28 -- common/autotest_common.sh@852 -- # return 0 00:17:45.873 12:37:28 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:17:45.873 12:37:28 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:17:45.873 12:37:28 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:17:45.873 12:37:28 -- bdev/bdev_raid.sh@70 -- # cat 00:17:45.873 12:37:28 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:17:46.132 [2024-10-01 12:37:28.416760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:17:46.132 [2024-10-01 12:37:28.418688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:17:46.132 [2024-10-01 12:37:28.418766] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:46.132 [2024-10-01 12:37:28.418775] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:46.133 [2024-10-01 12:37:28.418913] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:46.133 [2024-10-01 12:37:28.419237] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:46.133 [2024-10-01 12:37:28.419265] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:17:46.133 [2024-10-01 12:37:28.419453] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.133 Base_1 00:17:46.133 Base_2 00:17:46.133 12:37:28 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:17:46.133 12:37:28 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:46.133 12:37:28 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:17:46.133 12:37:28 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:17:46.133 12:37:28 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:17:46.133 12:37:28 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@12 -- # local i 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.133 12:37:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:17:46.392 [2024-10-01 
12:37:28.792220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:46.392 /dev/nbd0 00:17:46.392 12:37:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.392 12:37:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.392 12:37:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:17:46.392 12:37:28 -- common/autotest_common.sh@857 -- # local i 00:17:46.392 12:37:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:17:46.392 12:37:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:46.392 12:37:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:17:46.392 12:37:28 -- common/autotest_common.sh@861 -- # break 00:17:46.392 12:37:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:46.392 12:37:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:46.392 12:37:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.392 1+0 records in 00:17:46.392 1+0 records out 00:17:46.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003358 s, 12.2 MB/s 00:17:46.392 12:37:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.392 12:37:28 -- common/autotest_common.sh@874 -- # size=4096 00:17:46.392 12:37:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.392 12:37:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:46.392 12:37:28 -- common/autotest_common.sh@877 -- # return 0 00:17:46.392 12:37:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.392 12:37:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.392 12:37:28 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:17:46.392 12:37:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:46.392 12:37:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:46.661 { 00:17:46.661 "nbd_device": "/dev/nbd0", 00:17:46.661 "bdev_name": "raid" 00:17:46.661 } 00:17:46.661 ]' 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:46.661 { 00:17:46.661 "nbd_device": "/dev/nbd0", 00:17:46.661 "bdev_name": "raid" 00:17:46.661 } 00:17:46.661 ]' 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@65 -- # count=1 00:17:46.661 12:37:29 -- bdev/nbd_common.sh@66 -- # echo 1 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@98 -- # count=1 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@20 -- # local blksize 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@21 -- # cut -d ' 
' -f 5 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:17:46.661 12:37:29 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:17:46.662 12:37:29 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:17:46.662 12:37:29 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:17:46.662 12:37:29 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:17:46.662 12:37:29 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:17:46.662 4096+0 records in 00:17:46.662 4096+0 records out 00:17:46.662 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0246473 s, 85.1 MB/s 00:17:46.662 12:37:29 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:17:46.922 4096+0 records in 00:17:46.922 4096+0 records out 00:17:46.922 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.197779 s, 10.6 MB/s 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:17:46.922 128+0 records in 00:17:46.922 128+0 records out 00:17:46.922 65536 bytes (66 kB, 64 KiB) copied, 0.000908208 s, 72.2 MB/s 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:17:46.922 2035+0 records in 00:17:46.922 2035+0 records out 00:17:46.922 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0123734 s, 84.2 MB/s 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:17:46.922 456+0 records in 00:17:46.922 456+0 records out 00:17:46.922 233472 bytes (233 kB, 228 KiB) copied, 0.00338427 s, 69.0 MB/s 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:17:46.922 12:37:29 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@53 -- # return 0 00:17:46.922 12:37:29 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:17:46.922 12:37:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:46.922 12:37:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:46.922 12:37:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:46.922 12:37:29 -- bdev/nbd_common.sh@51 -- # local i 00:17:46.922 12:37:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.922 12:37:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:47.181 12:37:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.181 12:37:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.181 12:37:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.182 12:37:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.182 [2024-10-01 12:37:29.625251] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.182 12:37:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.182 12:37:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.182 12:37:29 -- bdev/nbd_common.sh@41 -- # break 00:17:47.182 12:37:29 -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.182 12:37:29 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:17:47.182 12:37:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:47.182 12:37:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@65 -- # true 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@65 -- # count=0 00:17:47.441 12:37:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:47.441 12:37:29 -- bdev/bdev_raid.sh@106 -- # count=0 00:17:47.441 12:37:29 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:17:47.441 12:37:29 -- bdev/bdev_raid.sh@111 -- # killprocess 112715 00:17:47.441 12:37:29 -- common/autotest_common.sh@926 -- # '[' -z 112715 ']' 00:17:47.441 12:37:29 -- common/autotest_common.sh@930 -- # kill -0 112715 00:17:47.441 12:37:29 -- common/autotest_common.sh@931 -- # uname 00:17:47.441 12:37:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.441 12:37:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112715 00:17:47.441 12:37:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:47.441 12:37:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.441 killing process with pid 112715 00:17:47.441 12:37:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112715' 00:17:47.441 12:37:29 -- common/autotest_common.sh@945 -- # kill 112715 00:17:47.441 [2024-10-01 12:37:29.898750] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.441 12:37:29 -- common/autotest_common.sh@950 -- # wait 112715 00:17:47.441 [2024-10-01 12:37:29.898856] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.441 [2024-10-01 12:37:29.898914] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.441 [2024-10-01 12:37:29.898924] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:17:47.702 [2024-10-01 12:37:30.061273] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.638 12:37:31 -- bdev/bdev_raid.sh@113 -- # return 0 00:17:48.638 00:17:48.638 real 0m3.838s 00:17:48.638 user 0m4.555s 00:17:48.638 sys 0m0.963s 00:17:48.638 12:37:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.638 12:37:31 -- common/autotest_common.sh@10 -- # set +x 00:17:48.638 ************************************ 00:17:48.638 END TEST raid_function_test_concat 00:17:48.638 ************************************ 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:17:48.896 12:37:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:48.896 12:37:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:48.896 12:37:31 -- common/autotest_common.sh@10 -- # set +x 00:17:48.896 ************************************ 00:17:48.896 START TEST raid0_resize_test 00:17:48.896 ************************************ 00:17:48.896 12:37:31 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@301 -- # raid_pid=112867 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 112867' 00:17:48.896 Process raid pid: 112867 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:48.896 12:37:31 -- bdev/bdev_raid.sh@303 -- # waitforlisten 112867 /var/tmp/spdk-raid.sock 00:17:48.896 12:37:31 -- common/autotest_common.sh@819 -- # '[' -z 112867 ']' 00:17:48.896 12:37:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:48.896 12:37:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:48.896 12:37:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:48.896 12:37:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.896 12:37:31 -- common/autotest_common.sh@10 -- # set +x 00:17:48.896 [2024-10-01 12:37:31.274327] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
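The raid0_resize_test that just started reduces to the following RPC sequence (each call appears verbatim in the xtrace below, via the rpc_py alias on /var/tmp/spdk-raid.sock):

  $rpc_py bdev_null_create Base_1 32 512                  # two 32 MiB null bdevs, 512-byte blocks
  $rpc_py bdev_null_create Base_2 32 512
  $rpc_py bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  $rpc_py bdev_null_resize Base_1 64                      # raid0 size is capped by the smallest base: still 64 MiB
  $rpc_py bdev_null_resize Base_2 64                      # both bases at 64 MiB now, so the raid doubles
  $rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'    # 262144 blocks -> 128 MiB once both bases are resized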
00:17:48.896 [2024-10-01 12:37:31.274476] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.155 [2024-10-01 12:37:31.437748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.155 [2024-10-01 12:37:31.591842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.412 [2024-10-01 12:37:31.746524] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.669 12:37:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.669 12:37:32 -- common/autotest_common.sh@852 -- # return 0 00:17:49.669 12:37:32 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:17:49.928 Base_1 00:17:49.928 12:37:32 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:17:49.928 Base_2 00:17:49.928 12:37:32 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:17:50.187 [2024-10-01 12:37:32.595084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:17:50.187 [2024-10-01 12:37:32.596921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:17:50.187 [2024-10-01 12:37:32.596981] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:50.187 [2024-10-01 12:37:32.596989] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:50.187 [2024-10-01 12:37:32.597115] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:17:50.187 [2024-10-01 12:37:32.597401] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:50.187 [2024-10-01 12:37:32.597430] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80 00:17:50.187 [2024-10-01 12:37:32.597615] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.187 12:37:32 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:17:50.446 [2024-10-01 12:37:32.778821] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:17:50.446 [2024-10-01 12:37:32.778854] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:17:50.446 true 00:17:50.446 12:37:32 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:17:50.446 12:37:32 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:17:50.446 [2024-10-01 12:37:32.962651] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.704 12:37:32 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:17:50.704 12:37:32 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:17:50.704 12:37:32 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:17:50.704 12:37:32 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:17:50.705 [2024-10-01 12:37:33.142259] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
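The block counts asserted here follow directly from the 32 MiB bases and the 512-byte block size:

  echo $((32 * 1024 * 1024 / 512))      # 65536 blocks per base
  echo $((2 * 65536))                   # 131072 blocks, i.e. the 64 MiB reported until Base_2 grows too
  echo $((262144 * 512 / 1024 / 1024))  # 128 -> the raid_size_mb checked after both resizes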
00:17:50.705 [2024-10-01 12:37:33.142288] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:17:50.705 [2024-10-01 12:37:33.142344] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:17:50.705 [2024-10-01 12:37:33.142395] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:50.705 true 00:17:50.705 12:37:33 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:17:50.705 12:37:33 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:17:50.964 [2024-10-01 12:37:33.330112] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.964 12:37:33 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:17:50.964 12:37:33 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:17:50.964 12:37:33 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:17:50.964 12:37:33 -- bdev/bdev_raid.sh@332 -- # killprocess 112867 00:17:50.964 12:37:33 -- common/autotest_common.sh@926 -- # '[' -z 112867 ']' 00:17:50.964 12:37:33 -- common/autotest_common.sh@930 -- # kill -0 112867 00:17:50.964 12:37:33 -- common/autotest_common.sh@931 -- # uname 00:17:50.964 12:37:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.964 12:37:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112867 00:17:50.964 12:37:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:50.964 12:37:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:50.964 killing process with pid 112867 00:17:50.964 12:37:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112867' 00:17:50.964 12:37:33 -- common/autotest_common.sh@945 -- # kill 112867 00:17:50.964 [2024-10-01 12:37:33.379822] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.964 [2024-10-01 12:37:33.379924] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.964 12:37:33 -- common/autotest_common.sh@950 -- # wait 112867 00:17:50.964 [2024-10-01 12:37:33.380001] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.964 [2024-10-01 12:37:33.380017] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline 00:17:50.964 [2024-10-01 12:37:33.380605] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@334 -- # return 0 00:17:52.378 00:17:52.378 real 0m3.236s 00:17:52.378 user 0m4.332s 00:17:52.378 sys 0m0.514s 00:17:52.378 12:37:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.378 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:17:52.378 ************************************ 00:17:52.378 END TEST raid0_resize_test 00:17:52.378 ************************************ 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:17:52.378 12:37:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:52.378 12:37:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:52.378 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:17:52.378 ************************************ 00:17:52.378 START TEST 
raid_state_function_test 00:17:52.378 ************************************ 00:17:52.378 12:37:34 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=112949 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112949' 00:17:52.378 Process raid pid: 112949 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:52.378 12:37:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 112949 /var/tmp/spdk-raid.sock 00:17:52.378 12:37:34 -- common/autotest_common.sh@819 -- # '[' -z 112949 ']' 00:17:52.378 12:37:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:52.378 12:37:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.378 12:37:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:52.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:52.378 12:37:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.378 12:37:34 -- common/autotest_common.sh@10 -- # set +x 00:17:52.378 [2024-10-01 12:37:34.595206] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
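At this point the raid_state_function_test harness has launched a fresh bdev_svc (-r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid) and parks in waitforlisten until the app answers RPCs on that socket. A simplified stand-in for that helper (an illustration only, assuming scripts/rpc.py and the rpc_get_methods RPC; the real function in common/autotest_common.sh additionally caps the number of retries):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        # Poll until the target answers any RPC, bailing out if the process dies first.
        while kill -0 "$pid" 2>/dev/null; do
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }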
00:17:52.378 [2024-10-01 12:37:34.595345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.378 [2024-10-01 12:37:34.762268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.637 [2024-10-01 12:37:34.915052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.637 [2024-10-01 12:37:35.072343] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.896 12:37:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.896 12:37:35 -- common/autotest_common.sh@852 -- # return 0 00:17:52.896 12:37:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:53.155 [2024-10-01 12:37:35.566696] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.155 [2024-10-01 12:37:35.566764] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.155 [2024-10-01 12:37:35.566775] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.155 [2024-10-01 12:37:35.566806] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.155 12:37:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.413 12:37:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.413 "name": "Existed_Raid", 00:17:53.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.413 "strip_size_kb": 64, 00:17:53.413 "state": "configuring", 00:17:53.413 "raid_level": "raid0", 00:17:53.413 "superblock": false, 00:17:53.413 "num_base_bdevs": 2, 00:17:53.413 "num_base_bdevs_discovered": 0, 00:17:53.413 "num_base_bdevs_operational": 2, 00:17:53.413 "base_bdevs_list": [ 00:17:53.413 { 00:17:53.413 "name": "BaseBdev1", 00:17:53.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.413 "is_configured": false, 00:17:53.413 "data_offset": 0, 00:17:53.413 "data_size": 0 00:17:53.413 }, 00:17:53.413 { 00:17:53.413 "name": "BaseBdev2", 00:17:53.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.413 "is_configured": false, 00:17:53.413 "data_offset": 0, 00:17:53.413 "data_size": 0 00:17:53.413 } 00:17:53.413 ] 00:17:53.413 }' 00:17:53.413 12:37:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.413 12:37:35 -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.981 12:37:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:53.981 [2024-10-01 12:37:36.453535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.981 [2024-10-01 12:37:36.453733] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:53.981 12:37:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:54.239 [2024-10-01 12:37:36.609328] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.239 [2024-10-01 12:37:36.609554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.239 [2024-10-01 12:37:36.609712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.239 [2024-10-01 12:37:36.609793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.239 12:37:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.498 [2024-10-01 12:37:36.816087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.498 BaseBdev1 00:17:54.498 12:37:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:54.498 12:37:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:54.498 12:37:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:54.498 12:37:36 -- common/autotest_common.sh@889 -- # local i 00:17:54.498 12:37:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:54.498 12:37:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:54.498 12:37:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.498 12:37:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.756 [ 00:17:54.756 { 00:17:54.756 "name": "BaseBdev1", 00:17:54.756 "aliases": [ 00:17:54.756 "518c6bbb-27fa-4986-8bfe-77f26e529555" 00:17:54.756 ], 00:17:54.756 "product_name": "Malloc disk", 00:17:54.756 "block_size": 512, 00:17:54.756 "num_blocks": 65536, 00:17:54.756 "uuid": "518c6bbb-27fa-4986-8bfe-77f26e529555", 00:17:54.756 "assigned_rate_limits": { 00:17:54.756 "rw_ios_per_sec": 0, 00:17:54.757 "rw_mbytes_per_sec": 0, 00:17:54.757 "r_mbytes_per_sec": 0, 00:17:54.757 "w_mbytes_per_sec": 0 00:17:54.757 }, 00:17:54.757 "claimed": true, 00:17:54.757 "claim_type": "exclusive_write", 00:17:54.757 "zoned": false, 00:17:54.757 "supported_io_types": { 00:17:54.757 "read": true, 00:17:54.757 "write": true, 00:17:54.757 "unmap": true, 00:17:54.757 "write_zeroes": true, 00:17:54.757 "flush": true, 00:17:54.757 "reset": true, 00:17:54.757 "compare": false, 00:17:54.757 "compare_and_write": false, 00:17:54.757 "abort": true, 00:17:54.757 "nvme_admin": false, 00:17:54.757 "nvme_io": false 00:17:54.757 }, 00:17:54.757 "memory_domains": [ 00:17:54.757 { 00:17:54.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.757 "dma_device_type": 2 00:17:54.757 } 00:17:54.757 ], 00:17:54.757 "driver_specific": {} 00:17:54.757 } 00:17:54.757 ] 00:17:54.757 12:37:37 
-- common/autotest_common.sh@895 -- # return 0 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.757 12:37:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.015 12:37:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.015 "name": "Existed_Raid", 00:17:55.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.015 "strip_size_kb": 64, 00:17:55.015 "state": "configuring", 00:17:55.015 "raid_level": "raid0", 00:17:55.015 "superblock": false, 00:17:55.015 "num_base_bdevs": 2, 00:17:55.015 "num_base_bdevs_discovered": 1, 00:17:55.015 "num_base_bdevs_operational": 2, 00:17:55.015 "base_bdevs_list": [ 00:17:55.015 { 00:17:55.015 "name": "BaseBdev1", 00:17:55.015 "uuid": "518c6bbb-27fa-4986-8bfe-77f26e529555", 00:17:55.015 "is_configured": true, 00:17:55.015 "data_offset": 0, 00:17:55.015 "data_size": 65536 00:17:55.015 }, 00:17:55.015 { 00:17:55.015 "name": "BaseBdev2", 00:17:55.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.015 "is_configured": false, 00:17:55.015 "data_offset": 0, 00:17:55.015 "data_size": 0 00:17:55.015 } 00:17:55.015 ] 00:17:55.015 }' 00:17:55.015 12:37:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.015 12:37:37 -- common/autotest_common.sh@10 -- # set +x 00:17:55.583 12:37:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:55.583 [2024-10-01 12:37:38.054388] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.583 [2024-10-01 12:37:38.054594] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:55.583 12:37:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:55.583 12:37:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:55.843 [2024-10-01 12:37:38.230166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.843 [2024-10-01 12:37:38.232201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.843 [2024-10-01 12:37:38.232381] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:17:55.843 12:37:38 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.843 12:37:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.102 12:37:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.102 "name": "Existed_Raid", 00:17:56.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.102 "strip_size_kb": 64, 00:17:56.102 "state": "configuring", 00:17:56.102 "raid_level": "raid0", 00:17:56.102 "superblock": false, 00:17:56.102 "num_base_bdevs": 2, 00:17:56.102 "num_base_bdevs_discovered": 1, 00:17:56.102 "num_base_bdevs_operational": 2, 00:17:56.102 "base_bdevs_list": [ 00:17:56.102 { 00:17:56.102 "name": "BaseBdev1", 00:17:56.102 "uuid": "518c6bbb-27fa-4986-8bfe-77f26e529555", 00:17:56.102 "is_configured": true, 00:17:56.102 "data_offset": 0, 00:17:56.102 "data_size": 65536 00:17:56.102 }, 00:17:56.102 { 00:17:56.102 "name": "BaseBdev2", 00:17:56.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.102 "is_configured": false, 00:17:56.102 "data_offset": 0, 00:17:56.102 "data_size": 0 00:17:56.102 } 00:17:56.102 ] 00:17:56.102 }' 00:17:56.102 12:37:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.102 12:37:38 -- common/autotest_common.sh@10 -- # set +x 00:17:56.670 12:37:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.670 [2024-10-01 12:37:39.130101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.670 [2024-10-01 12:37:39.130354] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:56.670 [2024-10-01 12:37:39.130405] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:56.670 [2024-10-01 12:37:39.130644] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:56.670 [2024-10-01 12:37:39.131088] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:56.670 [2024-10-01 12:37:39.131209] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:56.670 [2024-10-01 12:37:39.131593] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.670 BaseBdev2 00:17:56.670 12:37:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:56.670 12:37:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:56.670 12:37:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:56.670 12:37:39 -- common/autotest_common.sh@889 -- # local i 00:17:56.670 12:37:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:56.670 12:37:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:56.670 
12:37:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.928 12:37:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.186 [ 00:17:57.186 { 00:17:57.186 "name": "BaseBdev2", 00:17:57.186 "aliases": [ 00:17:57.186 "5c24decd-8434-4ad5-98a7-2f7518b77394" 00:17:57.186 ], 00:17:57.186 "product_name": "Malloc disk", 00:17:57.186 "block_size": 512, 00:17:57.186 "num_blocks": 65536, 00:17:57.186 "uuid": "5c24decd-8434-4ad5-98a7-2f7518b77394", 00:17:57.186 "assigned_rate_limits": { 00:17:57.186 "rw_ios_per_sec": 0, 00:17:57.186 "rw_mbytes_per_sec": 0, 00:17:57.186 "r_mbytes_per_sec": 0, 00:17:57.186 "w_mbytes_per_sec": 0 00:17:57.186 }, 00:17:57.186 "claimed": true, 00:17:57.186 "claim_type": "exclusive_write", 00:17:57.186 "zoned": false, 00:17:57.186 "supported_io_types": { 00:17:57.186 "read": true, 00:17:57.186 "write": true, 00:17:57.186 "unmap": true, 00:17:57.186 "write_zeroes": true, 00:17:57.186 "flush": true, 00:17:57.186 "reset": true, 00:17:57.186 "compare": false, 00:17:57.186 "compare_and_write": false, 00:17:57.186 "abort": true, 00:17:57.186 "nvme_admin": false, 00:17:57.186 "nvme_io": false 00:17:57.186 }, 00:17:57.186 "memory_domains": [ 00:17:57.186 { 00:17:57.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.186 "dma_device_type": 2 00:17:57.186 } 00:17:57.186 ], 00:17:57.186 "driver_specific": {} 00:17:57.186 } 00:17:57.186 ] 00:17:57.186 12:37:39 -- common/autotest_common.sh@895 -- # return 0 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:57.186 12:37:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.187 12:37:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.187 12:37:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.187 12:37:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.187 12:37:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.187 12:37:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.187 12:37:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.187 "name": "Existed_Raid", 00:17:57.187 "uuid": "cac36fe7-1221-48d1-b544-4a4afaa2d713", 00:17:57.187 "strip_size_kb": 64, 00:17:57.187 "state": "online", 00:17:57.187 "raid_level": "raid0", 00:17:57.187 "superblock": false, 00:17:57.187 "num_base_bdevs": 2, 00:17:57.187 "num_base_bdevs_discovered": 2, 00:17:57.187 "num_base_bdevs_operational": 2, 00:17:57.187 "base_bdevs_list": [ 00:17:57.187 { 00:17:57.187 "name": "BaseBdev1", 00:17:57.187 "uuid": "518c6bbb-27fa-4986-8bfe-77f26e529555", 00:17:57.187 "is_configured": true, 00:17:57.187 "data_offset": 0, 00:17:57.187 "data_size": 65536 00:17:57.187 }, 00:17:57.187 { 00:17:57.187 "name": "BaseBdev2", 
00:17:57.187 "uuid": "5c24decd-8434-4ad5-98a7-2f7518b77394", 00:17:57.187 "is_configured": true, 00:17:57.187 "data_offset": 0, 00:17:57.187 "data_size": 65536 00:17:57.187 } 00:17:57.187 ] 00:17:57.187 }' 00:17:57.187 12:37:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.187 12:37:39 -- common/autotest_common.sh@10 -- # set +x 00:17:57.754 12:37:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:58.013 [2024-10-01 12:37:40.416526] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:58.013 [2024-10-01 12:37:40.416695] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.013 [2024-10-01 12:37:40.416896] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.013 12:37:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.271 12:37:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.271 "name": "Existed_Raid", 00:17:58.271 "uuid": "cac36fe7-1221-48d1-b544-4a4afaa2d713", 00:17:58.271 "strip_size_kb": 64, 00:17:58.271 "state": "offline", 00:17:58.271 "raid_level": "raid0", 00:17:58.271 "superblock": false, 00:17:58.271 "num_base_bdevs": 2, 00:17:58.271 "num_base_bdevs_discovered": 1, 00:17:58.271 "num_base_bdevs_operational": 1, 00:17:58.271 "base_bdevs_list": [ 00:17:58.271 { 00:17:58.271 "name": null, 00:17:58.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.271 "is_configured": false, 00:17:58.271 "data_offset": 0, 00:17:58.271 "data_size": 65536 00:17:58.271 }, 00:17:58.271 { 00:17:58.271 "name": "BaseBdev2", 00:17:58.271 "uuid": "5c24decd-8434-4ad5-98a7-2f7518b77394", 00:17:58.271 "is_configured": true, 00:17:58.271 "data_offset": 0, 00:17:58.271 "data_size": 65536 00:17:58.271 } 00:17:58.271 ] 00:17:58.271 }' 00:17:58.271 12:37:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.271 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:17:58.839 12:37:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:58.839 12:37:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:58.839 12:37:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.839 12:37:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:59.098 12:37:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:59.098 12:37:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.098 12:37:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:59.098 [2024-10-01 12:37:41.547862] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.098 [2024-10-01 12:37:41.548076] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:59.357 12:37:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:59.357 12:37:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:59.357 12:37:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:59.357 12:37:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.357 12:37:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:59.357 12:37:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:59.357 12:37:41 -- bdev/bdev_raid.sh@287 -- # killprocess 112949 00:17:59.357 12:37:41 -- common/autotest_common.sh@926 -- # '[' -z 112949 ']' 00:17:59.357 12:37:41 -- common/autotest_common.sh@930 -- # kill -0 112949 00:17:59.357 12:37:41 -- common/autotest_common.sh@931 -- # uname 00:17:59.357 12:37:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.357 12:37:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112949 00:17:59.357 12:37:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:59.357 12:37:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.357 12:37:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112949' 00:17:59.357 killing process with pid 112949 00:17:59.357 12:37:41 -- common/autotest_common.sh@945 -- # kill 112949 00:17:59.357 [2024-10-01 12:37:41.872063] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.357 12:37:41 -- common/autotest_common.sh@950 -- # wait 112949 00:17:59.357 [2024-10-01 12:37:41.872303] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.734 ************************************ 00:18:00.734 END TEST raid_state_function_test 00:18:00.734 ************************************ 00:18:00.734 12:37:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:00.734 00:18:00.734 real 0m8.414s 00:18:00.734 user 0m13.995s 00:18:00.734 sys 0m1.265s 00:18:00.734 12:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.734 12:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:00.734 12:37:42 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:18:00.734 12:37:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:00.734 12:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:00.734 12:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:00.734 ************************************ 00:18:00.734 START TEST raid_state_function_test_sb 00:18:00.734 ************************************ 00:18:00.734 12:37:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:18:00.734 12:37:43 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=113251 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:00.734 Process raid pid: 113251 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113251' 00:18:00.734 12:37:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113251 /var/tmp/spdk-raid.sock 00:18:00.734 12:37:43 -- common/autotest_common.sh@819 -- # '[' -z 113251 ']' 00:18:00.734 12:37:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:00.734 12:37:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:00.734 12:37:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:00.734 12:37:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:00.734 12:37:43 -- common/autotest_common.sh@10 -- # set +x 00:18:00.734 [2024-10-01 12:37:43.097609] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
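From here the run repeats the state-function checks with superblock=true: bdev_raid_create is now invoked with -s, so raid metadata is persisted on the base bdevs. The effect is visible in the bdev dumps further down, where each 65536-block malloc bdev contributes data_offset 2048 / data_size 63488 and the array reports 126976 blocks instead of 131072. A sketch of the difference (same assumptions as the earlier sketch: bdev_svc listening on /var/tmp/spdk-raid.sock, run from the repo root, $rpc is our shorthand):

    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1   # 65536 blocks each
    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc bdev_get_bdevs -b Existed_Raid | jq '.[].num_blocks'   # 126976: 2048 blocks
                                                                # per leg hold the superblock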
00:18:00.734 [2024-10-01 12:37:43.097914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.734 [2024-10-01 12:37:43.265634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.992 [2024-10-01 12:37:43.414161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.251 [2024-10-01 12:37:43.566286] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.510 12:37:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:01.510 12:37:43 -- common/autotest_common.sh@852 -- # return 0 00:18:01.510 12:37:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:01.770 [2024-10-01 12:37:44.075678] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.770 [2024-10-01 12:37:44.075916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.770 [2024-10-01 12:37:44.076004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.770 [2024-10-01 12:37:44.076053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.770 "name": "Existed_Raid", 00:18:01.770 "uuid": "82ad34c4-5539-4ecb-89b7-958c6632f54b", 00:18:01.770 "strip_size_kb": 64, 00:18:01.770 "state": "configuring", 00:18:01.770 "raid_level": "raid0", 00:18:01.770 "superblock": true, 00:18:01.770 "num_base_bdevs": 2, 00:18:01.770 "num_base_bdevs_discovered": 0, 00:18:01.770 "num_base_bdevs_operational": 2, 00:18:01.770 "base_bdevs_list": [ 00:18:01.770 { 00:18:01.770 "name": "BaseBdev1", 00:18:01.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.770 "is_configured": false, 00:18:01.770 "data_offset": 0, 00:18:01.770 "data_size": 0 00:18:01.770 }, 00:18:01.770 { 00:18:01.770 "name": "BaseBdev2", 00:18:01.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.770 "is_configured": false, 00:18:01.770 "data_offset": 0, 00:18:01.770 "data_size": 0 00:18:01.770 } 00:18:01.770 ] 00:18:01.770 }' 00:18:01.770 12:37:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.770 12:37:44 -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.376 12:37:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:02.635 [2024-10-01 12:37:44.974279] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.635 [2024-10-01 12:37:44.974453] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:02.635 12:37:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:02.635 [2024-10-01 12:37:45.158114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.635 [2024-10-01 12:37:45.158310] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.635 [2024-10-01 12:37:45.158385] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.635 [2024-10-01 12:37:45.158437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.894 12:37:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:02.894 [2024-10-01 12:37:45.359740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.894 BaseBdev1 00:18:02.894 12:37:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:02.894 12:37:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:02.894 12:37:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:02.894 12:37:45 -- common/autotest_common.sh@889 -- # local i 00:18:02.894 12:37:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:02.894 12:37:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:02.894 12:37:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.153 12:37:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:03.435 [ 00:18:03.435 { 00:18:03.435 "name": "BaseBdev1", 00:18:03.435 "aliases": [ 00:18:03.435 "a5f052c4-2b74-4a8c-be24-8e82e43c69c4" 00:18:03.435 ], 00:18:03.435 "product_name": "Malloc disk", 00:18:03.435 "block_size": 512, 00:18:03.435 "num_blocks": 65536, 00:18:03.435 "uuid": "a5f052c4-2b74-4a8c-be24-8e82e43c69c4", 00:18:03.435 "assigned_rate_limits": { 00:18:03.435 "rw_ios_per_sec": 0, 00:18:03.435 "rw_mbytes_per_sec": 0, 00:18:03.435 "r_mbytes_per_sec": 0, 00:18:03.435 "w_mbytes_per_sec": 0 00:18:03.435 }, 00:18:03.435 "claimed": true, 00:18:03.435 "claim_type": "exclusive_write", 00:18:03.435 "zoned": false, 00:18:03.435 "supported_io_types": { 00:18:03.435 "read": true, 00:18:03.435 "write": true, 00:18:03.435 "unmap": true, 00:18:03.435 "write_zeroes": true, 00:18:03.435 "flush": true, 00:18:03.435 "reset": true, 00:18:03.435 "compare": false, 00:18:03.435 "compare_and_write": false, 00:18:03.435 "abort": true, 00:18:03.435 "nvme_admin": false, 00:18:03.435 "nvme_io": false 00:18:03.435 }, 00:18:03.435 "memory_domains": [ 00:18:03.435 { 00:18:03.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.435 "dma_device_type": 2 00:18:03.435 } 00:18:03.435 ], 00:18:03.435 "driver_specific": {} 00:18:03.435 } 00:18:03.435 ] 00:18:03.435 
12:37:45 -- common/autotest_common.sh@895 -- # return 0 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.435 "name": "Existed_Raid", 00:18:03.435 "uuid": "823e6806-032b-4b4c-bb08-99c6da960f6d", 00:18:03.435 "strip_size_kb": 64, 00:18:03.435 "state": "configuring", 00:18:03.435 "raid_level": "raid0", 00:18:03.435 "superblock": true, 00:18:03.435 "num_base_bdevs": 2, 00:18:03.435 "num_base_bdevs_discovered": 1, 00:18:03.435 "num_base_bdevs_operational": 2, 00:18:03.435 "base_bdevs_list": [ 00:18:03.435 { 00:18:03.435 "name": "BaseBdev1", 00:18:03.435 "uuid": "a5f052c4-2b74-4a8c-be24-8e82e43c69c4", 00:18:03.435 "is_configured": true, 00:18:03.435 "data_offset": 2048, 00:18:03.435 "data_size": 63488 00:18:03.435 }, 00:18:03.435 { 00:18:03.435 "name": "BaseBdev2", 00:18:03.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.435 "is_configured": false, 00:18:03.435 "data_offset": 0, 00:18:03.435 "data_size": 0 00:18:03.435 } 00:18:03.435 ] 00:18:03.435 }' 00:18:03.435 12:37:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.435 12:37:45 -- common/autotest_common.sh@10 -- # set +x 00:18:04.002 12:37:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.261 [2024-10-01 12:37:46.578063] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.261 [2024-10-01 12:37:46.578233] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:04.261 12:37:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:04.261 12:37:46 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:04.519 12:37:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:04.519 BaseBdev1 00:18:04.519 12:37:47 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:04.519 12:37:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:04.519 12:37:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:04.519 12:37:47 -- common/autotest_common.sh@889 -- # local i 00:18:04.519 12:37:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:04.519 12:37:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:04.519 12:37:47 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.778 12:37:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.037 [ 00:18:05.037 { 00:18:05.037 "name": "BaseBdev1", 00:18:05.037 "aliases": [ 00:18:05.037 "ce056724-ccf9-4b41-940b-ec576b87be3b" 00:18:05.037 ], 00:18:05.037 "product_name": "Malloc disk", 00:18:05.037 "block_size": 512, 00:18:05.037 "num_blocks": 65536, 00:18:05.037 "uuid": "ce056724-ccf9-4b41-940b-ec576b87be3b", 00:18:05.037 "assigned_rate_limits": { 00:18:05.037 "rw_ios_per_sec": 0, 00:18:05.037 "rw_mbytes_per_sec": 0, 00:18:05.037 "r_mbytes_per_sec": 0, 00:18:05.037 "w_mbytes_per_sec": 0 00:18:05.037 }, 00:18:05.037 "claimed": false, 00:18:05.037 "zoned": false, 00:18:05.037 "supported_io_types": { 00:18:05.037 "read": true, 00:18:05.037 "write": true, 00:18:05.037 "unmap": true, 00:18:05.037 "write_zeroes": true, 00:18:05.037 "flush": true, 00:18:05.037 "reset": true, 00:18:05.037 "compare": false, 00:18:05.037 "compare_and_write": false, 00:18:05.037 "abort": true, 00:18:05.037 "nvme_admin": false, 00:18:05.037 "nvme_io": false 00:18:05.037 }, 00:18:05.037 "memory_domains": [ 00:18:05.037 { 00:18:05.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.037 "dma_device_type": 2 00:18:05.037 } 00:18:05.037 ], 00:18:05.037 "driver_specific": {} 00:18:05.037 } 00:18:05.037 ] 00:18:05.037 12:37:47 -- common/autotest_common.sh@895 -- # return 0 00:18:05.037 12:37:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:05.297 [2024-10-01 12:37:47.571425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.297 [2024-10-01 12:37:47.573325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.297 [2024-10-01 12:37:47.573491] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.297 "name": "Existed_Raid", 00:18:05.297 "uuid": "f9b53958-4be9-4838-9b38-54e45fa4ff6d", 00:18:05.297 "strip_size_kb": 64, 00:18:05.297 "state": 
"configuring", 00:18:05.297 "raid_level": "raid0", 00:18:05.297 "superblock": true, 00:18:05.297 "num_base_bdevs": 2, 00:18:05.297 "num_base_bdevs_discovered": 1, 00:18:05.297 "num_base_bdevs_operational": 2, 00:18:05.297 "base_bdevs_list": [ 00:18:05.297 { 00:18:05.297 "name": "BaseBdev1", 00:18:05.297 "uuid": "ce056724-ccf9-4b41-940b-ec576b87be3b", 00:18:05.297 "is_configured": true, 00:18:05.297 "data_offset": 2048, 00:18:05.297 "data_size": 63488 00:18:05.297 }, 00:18:05.297 { 00:18:05.297 "name": "BaseBdev2", 00:18:05.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.297 "is_configured": false, 00:18:05.297 "data_offset": 0, 00:18:05.297 "data_size": 0 00:18:05.297 } 00:18:05.297 ] 00:18:05.297 }' 00:18:05.297 12:37:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.297 12:37:47 -- common/autotest_common.sh@10 -- # set +x 00:18:05.864 12:37:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.123 [2024-10-01 12:37:48.493037] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.123 [2024-10-01 12:37:48.493370] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:06.123 [2024-10-01 12:37:48.493489] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:06.123 [2024-10-01 12:37:48.493645] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:06.123 [2024-10-01 12:37:48.494004] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:06.123 [2024-10-01 12:37:48.494074] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:06.123 [2024-10-01 12:37:48.494293] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.123 BaseBdev2 00:18:06.123 12:37:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:06.123 12:37:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:06.123 12:37:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:06.123 12:37:48 -- common/autotest_common.sh@889 -- # local i 00:18:06.123 12:37:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:06.123 12:37:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:06.123 12:37:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.382 12:37:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.382 [ 00:18:06.382 { 00:18:06.382 "name": "BaseBdev2", 00:18:06.382 "aliases": [ 00:18:06.382 "55993e47-9b63-46bd-9c75-80f9fb35cc81" 00:18:06.382 ], 00:18:06.382 "product_name": "Malloc disk", 00:18:06.382 "block_size": 512, 00:18:06.382 "num_blocks": 65536, 00:18:06.382 "uuid": "55993e47-9b63-46bd-9c75-80f9fb35cc81", 00:18:06.382 "assigned_rate_limits": { 00:18:06.382 "rw_ios_per_sec": 0, 00:18:06.382 "rw_mbytes_per_sec": 0, 00:18:06.382 "r_mbytes_per_sec": 0, 00:18:06.382 "w_mbytes_per_sec": 0 00:18:06.382 }, 00:18:06.382 "claimed": true, 00:18:06.382 "claim_type": "exclusive_write", 00:18:06.382 "zoned": false, 00:18:06.382 "supported_io_types": { 00:18:06.382 "read": true, 00:18:06.382 "write": true, 00:18:06.382 "unmap": true, 00:18:06.382 "write_zeroes": true, 00:18:06.382 "flush": true, 00:18:06.382 
"reset": true, 00:18:06.382 "compare": false, 00:18:06.382 "compare_and_write": false, 00:18:06.382 "abort": true, 00:18:06.382 "nvme_admin": false, 00:18:06.382 "nvme_io": false 00:18:06.382 }, 00:18:06.382 "memory_domains": [ 00:18:06.382 { 00:18:06.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.382 "dma_device_type": 2 00:18:06.382 } 00:18:06.382 ], 00:18:06.382 "driver_specific": {} 00:18:06.382 } 00:18:06.382 ] 00:18:06.382 12:37:48 -- common/autotest_common.sh@895 -- # return 0 00:18:06.382 12:37:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:06.382 12:37:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:06.382 12:37:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:18:06.382 12:37:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.383 12:37:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.649 12:37:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.649 "name": "Existed_Raid", 00:18:06.649 "uuid": "f9b53958-4be9-4838-9b38-54e45fa4ff6d", 00:18:06.649 "strip_size_kb": 64, 00:18:06.649 "state": "online", 00:18:06.649 "raid_level": "raid0", 00:18:06.649 "superblock": true, 00:18:06.649 "num_base_bdevs": 2, 00:18:06.649 "num_base_bdevs_discovered": 2, 00:18:06.649 "num_base_bdevs_operational": 2, 00:18:06.649 "base_bdevs_list": [ 00:18:06.649 { 00:18:06.649 "name": "BaseBdev1", 00:18:06.649 "uuid": "ce056724-ccf9-4b41-940b-ec576b87be3b", 00:18:06.649 "is_configured": true, 00:18:06.649 "data_offset": 2048, 00:18:06.649 "data_size": 63488 00:18:06.649 }, 00:18:06.649 { 00:18:06.649 "name": "BaseBdev2", 00:18:06.649 "uuid": "55993e47-9b63-46bd-9c75-80f9fb35cc81", 00:18:06.649 "is_configured": true, 00:18:06.649 "data_offset": 2048, 00:18:06.649 "data_size": 63488 00:18:06.649 } 00:18:06.649 ] 00:18:06.649 }' 00:18:06.649 12:37:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.649 12:37:49 -- common/autotest_common.sh@10 -- # set +x 00:18:07.216 12:37:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:07.216 [2024-10-01 12:37:49.739332] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.216 [2024-10-01 12:37:49.739465] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.216 [2024-10-01 12:37:49.739615] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:07.475 
12:37:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.475 12:37:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.734 12:37:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:07.734 "name": "Existed_Raid", 00:18:07.734 "uuid": "f9b53958-4be9-4838-9b38-54e45fa4ff6d", 00:18:07.734 "strip_size_kb": 64, 00:18:07.734 "state": "offline", 00:18:07.734 "raid_level": "raid0", 00:18:07.734 "superblock": true, 00:18:07.734 "num_base_bdevs": 2, 00:18:07.734 "num_base_bdevs_discovered": 1, 00:18:07.734 "num_base_bdevs_operational": 1, 00:18:07.734 "base_bdevs_list": [ 00:18:07.734 { 00:18:07.734 "name": null, 00:18:07.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.734 "is_configured": false, 00:18:07.734 "data_offset": 2048, 00:18:07.734 "data_size": 63488 00:18:07.734 }, 00:18:07.734 { 00:18:07.734 "name": "BaseBdev2", 00:18:07.734 "uuid": "55993e47-9b63-46bd-9c75-80f9fb35cc81", 00:18:07.734 "is_configured": true, 00:18:07.734 "data_offset": 2048, 00:18:07.734 "data_size": 63488 00:18:07.734 } 00:18:07.734 ] 00:18:07.734 }' 00:18:07.734 12:37:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:07.734 12:37:50 -- common/autotest_common.sh@10 -- # set +x 00:18:08.302 12:37:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:08.302 12:37:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:08.302 12:37:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:08.302 12:37:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.302 12:37:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:08.302 12:37:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:08.302 12:37:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:08.562 [2024-10-01 12:37:50.921814] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.562 [2024-10-01 12:37:50.922013] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:08.562 12:37:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:08.562 12:37:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:08.562 12:37:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.562 12:37:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:08.821 12:37:51 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:18:08.821 12:37:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:08.821 12:37:51 -- bdev/bdev_raid.sh@287 -- # killprocess 113251 00:18:08.821 12:37:51 -- common/autotest_common.sh@926 -- # '[' -z 113251 ']' 00:18:08.821 12:37:51 -- common/autotest_common.sh@930 -- # kill -0 113251 00:18:08.821 12:37:51 -- common/autotest_common.sh@931 -- # uname 00:18:08.821 12:37:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:08.821 12:37:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113251 00:18:08.821 12:37:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:08.821 12:37:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:08.821 12:37:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113251' 00:18:08.821 killing process with pid 113251 00:18:08.821 12:37:51 -- common/autotest_common.sh@945 -- # kill 113251 00:18:08.821 [2024-10-01 12:37:51.217584] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.821 12:37:51 -- common/autotest_common.sh@950 -- # wait 113251 00:18:08.821 [2024-10-01 12:37:51.217732] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.759 ************************************ 00:18:09.759 END TEST raid_state_function_test_sb 00:18:09.759 ************************************ 00:18:09.759 12:37:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:09.759 00:18:09.759 real 0m9.257s 00:18:09.759 user 0m15.331s 00:18:09.759 sys 0m1.446s 00:18:09.759 12:37:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.759 12:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:18:10.018 12:37:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:10.018 12:37:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:10.018 12:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:10.018 ************************************ 00:18:10.018 START TEST raid_superblock_test 00:18:10.018 ************************************ 00:18:10.018 12:37:52 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=113563 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:10.018 12:37:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 113563 /var/tmp/spdk-raid.sock 00:18:10.018 12:37:52 -- common/autotest_common.sh@819 -- # '[' -z 113563 ']' 00:18:10.019 12:37:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:10.019 12:37:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:10.019 12:37:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:10.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:10.019 12:37:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:10.019 12:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:10.019 [2024-10-01 12:37:52.431356] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:10.019 [2024-10-01 12:37:52.431491] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113563 ] 00:18:10.278 [2024-10-01 12:37:52.598163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.278 [2024-10-01 12:37:52.746718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.537 [2024-10-01 12:37:52.896311] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.795 12:37:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:10.795 12:37:53 -- common/autotest_common.sh@852 -- # return 0 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.795 12:37:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:11.054 malloc1 00:18:11.054 12:37:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:11.312 [2024-10-01 12:37:53.597198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.312 [2024-10-01 12:37:53.597276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.312 [2024-10-01 12:37:53.597317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:11.312 [2024-10-01 12:37:53.597359] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.312 [2024-10-01 12:37:53.599561] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.312 [2024-10-01 12:37:53.599629] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:11.312 pt1 00:18:11.312 12:37:53 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:18:11.312 12:37:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:11.312 12:37:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:11.312 12:37:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:11.313 12:37:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:11.313 12:37:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:11.313 12:37:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:11.313 12:37:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:11.313 12:37:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:11.313 malloc2 00:18:11.571 12:37:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.571 [2024-10-01 12:37:54.014275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.571 [2024-10-01 12:37:54.014347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.571 [2024-10-01 12:37:54.014384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:11.571 [2024-10-01 12:37:54.014431] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.571 [2024-10-01 12:37:54.016576] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.571 [2024-10-01 12:37:54.016625] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.571 pt2 00:18:11.571 12:37:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:11.571 12:37:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:11.571 12:37:54 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:11.831 [2024-10-01 12:37:54.174176] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.831 [2024-10-01 12:37:54.176038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.831 [2024-10-01 12:37:54.176187] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:18:11.831 [2024-10-01 12:37:54.176197] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:11.831 [2024-10-01 12:37:54.176298] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:11.831 [2024-10-01 12:37:54.176609] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:18:11.831 [2024-10-01 12:37:54.176624] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:18:11.831 [2024-10-01 12:37:54.176753] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.831 12:37:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.090 12:37:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.090 "name": "raid_bdev1", 00:18:12.090 "uuid": "6bd08316-5f1a-4cf2-a015-3ef3b9156b74", 00:18:12.090 "strip_size_kb": 64, 00:18:12.090 "state": "online", 00:18:12.090 "raid_level": "raid0", 00:18:12.090 "superblock": true, 00:18:12.090 "num_base_bdevs": 2, 00:18:12.090 "num_base_bdevs_discovered": 2, 00:18:12.090 "num_base_bdevs_operational": 2, 00:18:12.090 "base_bdevs_list": [ 00:18:12.090 { 00:18:12.090 "name": "pt1", 00:18:12.090 "uuid": "a0f33dc3-8987-5c8b-be64-1a87bae0d6e6", 00:18:12.090 "is_configured": true, 00:18:12.090 "data_offset": 2048, 00:18:12.090 "data_size": 63488 00:18:12.090 }, 00:18:12.090 { 00:18:12.090 "name": "pt2", 00:18:12.090 "uuid": "9e3c4b56-b31c-5b6c-adf2-6f7b601f23cf", 00:18:12.090 "is_configured": true, 00:18:12.090 "data_offset": 2048, 00:18:12.090 "data_size": 63488 00:18:12.090 } 00:18:12.090 ] 00:18:12.090 }' 00:18:12.090 12:37:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.090 12:37:54 -- common/autotest_common.sh@10 -- # set +x 00:18:12.659 12:37:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:12.659 12:37:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:12.659 [2024-10-01 12:37:55.084959] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.659 12:37:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6bd08316-5f1a-4cf2-a015-3ef3b9156b74 00:18:12.659 12:37:55 -- bdev/bdev_raid.sh@380 -- # '[' -z 6bd08316-5f1a-4cf2-a015-3ef3b9156b74 ']' 00:18:12.659 12:37:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:12.918 [2024-10-01 12:37:55.268515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.918 [2024-10-01 12:37:55.268541] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.918 [2024-10-01 12:37:55.268602] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.918 [2024-10-01 12:37:55.268646] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.918 [2024-10-01 12:37:55.268655] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:18:12.918 12:37:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.918 12:37:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:13.177 12:37:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:13.177 12:37:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:13.177 12:37:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.177 12:37:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:18:13.177 12:37:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.177 12:37:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:13.437 12:37:55 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:13.437 12:37:55 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:13.695 12:37:55 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:13.695 12:37:55 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:18:13.695 12:37:55 -- common/autotest_common.sh@640 -- # local es=0 00:18:13.695 12:37:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:18:13.695 12:37:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.695 12:37:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.695 12:37:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.695 12:37:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.695 12:37:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.695 12:37:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.695 12:37:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.695 12:37:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:13.695 12:37:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:18:13.695 [2024-10-01 12:37:56.155234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:13.695 [2024-10-01 12:37:56.157024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:13.695 [2024-10-01 12:37:56.157096] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:13.695 [2024-10-01 12:37:56.157157] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:13.695 [2024-10-01 12:37:56.157186] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.696 [2024-10-01 12:37:56.157195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:18:13.696 request: 00:18:13.696 { 00:18:13.696 "name": "raid_bdev1", 00:18:13.696 "raid_level": "raid0", 00:18:13.696 "base_bdevs": [ 00:18:13.696 "malloc1", 00:18:13.696 "malloc2" 00:18:13.696 ], 00:18:13.696 "superblock": false, 00:18:13.696 "strip_size_kb": 64, 00:18:13.696 "method": "bdev_raid_create", 00:18:13.696 "req_id": 1 00:18:13.696 } 00:18:13.696 Got JSON-RPC error response 00:18:13.696 response: 00:18:13.696 { 00:18:13.696 "code": -17, 00:18:13.696 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:13.696 } 00:18:13.696 12:37:56 -- common/autotest_common.sh@643 -- # es=1 00:18:13.696 12:37:56 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:13.696 12:37:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:13.696 12:37:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:13.696 12:37:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:13.696 12:37:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.954 12:37:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:13.954 12:37:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:13.954 12:37:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:14.213 [2024-10-01 12:37:56.514674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.214 [2024-10-01 12:37:56.514764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.214 [2024-10-01 12:37:56.514814] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:18:14.214 [2024-10-01 12:37:56.514838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.214 [2024-10-01 12:37:56.516987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.214 [2024-10-01 12:37:56.517040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.214 [2024-10-01 12:37:56.517151] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:14.214 [2024-10-01 12:37:56.517201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.214 pt1 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.214 "name": "raid_bdev1", 00:18:14.214 "uuid": "6bd08316-5f1a-4cf2-a015-3ef3b9156b74", 00:18:14.214 "strip_size_kb": 64, 00:18:14.214 "state": "configuring", 00:18:14.214 "raid_level": "raid0", 00:18:14.214 "superblock": true, 00:18:14.214 "num_base_bdevs": 2, 00:18:14.214 "num_base_bdevs_discovered": 1, 00:18:14.214 "num_base_bdevs_operational": 2, 00:18:14.214 "base_bdevs_list": [ 00:18:14.214 { 00:18:14.214 "name": "pt1", 00:18:14.214 "uuid": "a0f33dc3-8987-5c8b-be64-1a87bae0d6e6", 00:18:14.214 "is_configured": true, 00:18:14.214 "data_offset": 2048, 00:18:14.214 "data_size": 63488 00:18:14.214 }, 00:18:14.214 { 00:18:14.214 "name": null, 00:18:14.214 "uuid": 
"9e3c4b56-b31c-5b6c-adf2-6f7b601f23cf", 00:18:14.214 "is_configured": false, 00:18:14.214 "data_offset": 2048, 00:18:14.214 "data_size": 63488 00:18:14.214 } 00:18:14.214 ] 00:18:14.214 }' 00:18:14.214 12:37:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.214 12:37:56 -- common/autotest_common.sh@10 -- # set +x 00:18:14.781 12:37:57 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:18:14.781 12:37:57 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:14.781 12:37:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:14.781 12:37:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.039 [2024-10-01 12:37:57.385600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.039 [2024-10-01 12:37:57.385691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.039 [2024-10-01 12:37:57.385725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:15.039 [2024-10-01 12:37:57.385752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.039 [2024-10-01 12:37:57.386161] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.039 [2024-10-01 12:37:57.386205] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.039 [2024-10-01 12:37:57.386293] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:15.039 [2024-10-01 12:37:57.386313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.039 [2024-10-01 12:37:57.386404] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:18:15.039 [2024-10-01 12:37:57.386411] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:15.039 [2024-10-01 12:37:57.386512] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:15.039 [2024-10-01 12:37:57.386770] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:18:15.039 [2024-10-01 12:37:57.386790] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:18:15.039 [2024-10-01 12:37:57.386907] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.039 pt2 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.039 12:37:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.298 12:37:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.298 "name": "raid_bdev1", 00:18:15.298 "uuid": "6bd08316-5f1a-4cf2-a015-3ef3b9156b74", 00:18:15.298 "strip_size_kb": 64, 00:18:15.298 "state": "online", 00:18:15.298 "raid_level": "raid0", 00:18:15.298 "superblock": true, 00:18:15.298 "num_base_bdevs": 2, 00:18:15.298 "num_base_bdevs_discovered": 2, 00:18:15.298 "num_base_bdevs_operational": 2, 00:18:15.298 "base_bdevs_list": [ 00:18:15.298 { 00:18:15.298 "name": "pt1", 00:18:15.298 "uuid": "a0f33dc3-8987-5c8b-be64-1a87bae0d6e6", 00:18:15.298 "is_configured": true, 00:18:15.298 "data_offset": 2048, 00:18:15.298 "data_size": 63488 00:18:15.298 }, 00:18:15.298 { 00:18:15.298 "name": "pt2", 00:18:15.298 "uuid": "9e3c4b56-b31c-5b6c-adf2-6f7b601f23cf", 00:18:15.298 "is_configured": true, 00:18:15.298 "data_offset": 2048, 00:18:15.298 "data_size": 63488 00:18:15.298 } 00:18:15.298 ] 00:18:15.298 }' 00:18:15.298 12:37:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.298 12:37:57 -- common/autotest_common.sh@10 -- # set +x 00:18:15.866 12:37:58 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:15.866 12:37:58 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:15.866 [2024-10-01 12:37:58.280459] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.866 12:37:58 -- bdev/bdev_raid.sh@430 -- # '[' 6bd08316-5f1a-4cf2-a015-3ef3b9156b74 '!=' 6bd08316-5f1a-4cf2-a015-3ef3b9156b74 ']' 00:18:15.866 12:37:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:15.866 12:37:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:15.866 12:37:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:15.866 12:37:58 -- bdev/bdev_raid.sh@511 -- # killprocess 113563 00:18:15.866 12:37:58 -- common/autotest_common.sh@926 -- # '[' -z 113563 ']' 00:18:15.866 12:37:58 -- common/autotest_common.sh@930 -- # kill -0 113563 00:18:15.866 12:37:58 -- common/autotest_common.sh@931 -- # uname 00:18:15.866 12:37:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:15.866 12:37:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113563 00:18:15.866 12:37:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:15.866 12:37:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:15.866 12:37:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113563' 00:18:15.866 killing process with pid 113563 00:18:15.866 12:37:58 -- common/autotest_common.sh@945 -- # kill 113563 00:18:15.866 [2024-10-01 12:37:58.333513] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.866 [2024-10-01 12:37:58.333573] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.866 [2024-10-01 12:37:58.333617] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.866 [2024-10-01 12:37:58.333625] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:18:15.866 12:37:58 -- common/autotest_common.sh@950 -- # wait 113563 00:18:16.144 [2024-10-01 12:37:58.495389] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.083 12:37:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:17.083 00:18:17.083 real 0m7.195s 
00:18:17.083 user 0m11.544s 00:18:17.083 sys 0m1.209s 00:18:17.083 12:37:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.083 ************************************ 00:18:17.083 END TEST raid_superblock_test 00:18:17.083 ************************************ 00:18:17.083 12:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:18:17.341 12:37:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:17.341 12:37:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:17.341 12:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.341 ************************************ 00:18:17.341 START TEST raid_state_function_test 00:18:17.341 ************************************ 00:18:17.341 12:37:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=113801 00:18:17.341 Process raid pid: 113801 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113801' 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:17.341 12:37:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113801 /var/tmp/spdk-raid.sock 00:18:17.341 12:37:59 -- common/autotest_common.sh@819 -- # '[' -z 113801 ']' 00:18:17.341 12:37:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:17.341 12:37:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:17.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
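For reference, the flow being traced from this point on can be replayed by hand against the same socket. A minimal bash sketch, using only the binaries and RPC calls that appear verbatim in this log (the rpc() shorthand and the sleep are mine; pids and UUIDs will of course differ between runs):

  # start the bare bdev application these tests drive, listening on the raid test socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  sleep 1   # the harness instead polls the socket with waitforlisten, as logged above

  # shorthand for the RPC client invocation repeated throughout this trace
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # two 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each, matching the dumps below)
  rpc bdev_malloc_create 32 512 -b BaseBdev1
  rpc bdev_malloc_create 32 512 -b BaseBdev2

  # assemble a concat array with a 64 KiB strip size; without -s no superblock is written
  rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # every verify_raid_bdev_state assertion in this trace is this query filtered through jq
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

  # deleting a base bdev of a non-redundant level drops the array from online to offline
  rpc bdev_malloc_delete BaseBdev1

  kill "$raid_pid"

Note that the harness itself issues bdev_raid_create before either base bdev exists, which is what parks Existed_Raid in the "configuring" state asserted below; the array only reaches "online" once BaseBdev1 and BaseBdev2 have been created and claimed, and deleting one of them again is what produces the "offline" state checked at the end of the test.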
00:18:17.341 12:37:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:17.341 12:37:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:17.341 12:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.341 [2024-10-01 12:37:59.702406] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:17.341 [2024-10-01 12:37:59.702544] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.341 [2024-10-01 12:37:59.867761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.600 [2024-10-01 12:38:00.025616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.863 [2024-10-01 12:38:00.178251] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.123 12:38:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:18.123 12:38:00 -- common/autotest_common.sh@852 -- # return 0 00:18:18.123 12:38:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:18.382 [2024-10-01 12:38:00.674845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.382 [2024-10-01 12:38:00.674919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.382 [2024-10-01 12:38:00.674930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.382 [2024-10-01 12:38:00.674946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.382 12:38:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:18.382 12:38:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.382 12:38:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.382 12:38:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.383 "name": "Existed_Raid", 00:18:18.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.383 "strip_size_kb": 64, 00:18:18.383 "state": "configuring", 00:18:18.383 "raid_level": "concat", 00:18:18.383 "superblock": false, 00:18:18.383 "num_base_bdevs": 2, 00:18:18.383 "num_base_bdevs_discovered": 0, 00:18:18.383 "num_base_bdevs_operational": 2, 00:18:18.383 "base_bdevs_list": [ 00:18:18.383 { 00:18:18.383 "name": "BaseBdev1", 00:18:18.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.383 "is_configured": false, 
00:18:18.383 "data_offset": 0, 00:18:18.383 "data_size": 0 00:18:18.383 }, 00:18:18.383 { 00:18:18.383 "name": "BaseBdev2", 00:18:18.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.383 "is_configured": false, 00:18:18.383 "data_offset": 0, 00:18:18.383 "data_size": 0 00:18:18.383 } 00:18:18.383 ] 00:18:18.383 }' 00:18:18.383 12:38:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.383 12:38:00 -- common/autotest_common.sh@10 -- # set +x 00:18:18.952 12:38:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:19.211 [2024-10-01 12:38:01.577573] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.211 [2024-10-01 12:38:01.577614] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:19.211 12:38:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:19.469 [2024-10-01 12:38:01.757339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.469 [2024-10-01 12:38:01.757411] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.469 [2024-10-01 12:38:01.757420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.469 [2024-10-01 12:38:01.757457] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.469 12:38:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.469 [2024-10-01 12:38:01.967924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.469 BaseBdev1 00:18:19.469 12:38:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:19.469 12:38:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:19.469 12:38:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:19.469 12:38:01 -- common/autotest_common.sh@889 -- # local i 00:18:19.469 12:38:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:19.469 12:38:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:19.470 12:38:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.729 12:38:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.989 [ 00:18:19.989 { 00:18:19.989 "name": "BaseBdev1", 00:18:19.989 "aliases": [ 00:18:19.989 "d0fd7e73-73d4-45cb-88bf-5e0a334d7f65" 00:18:19.989 ], 00:18:19.989 "product_name": "Malloc disk", 00:18:19.989 "block_size": 512, 00:18:19.989 "num_blocks": 65536, 00:18:19.989 "uuid": "d0fd7e73-73d4-45cb-88bf-5e0a334d7f65", 00:18:19.989 "assigned_rate_limits": { 00:18:19.989 "rw_ios_per_sec": 0, 00:18:19.989 "rw_mbytes_per_sec": 0, 00:18:19.989 "r_mbytes_per_sec": 0, 00:18:19.989 "w_mbytes_per_sec": 0 00:18:19.989 }, 00:18:19.989 "claimed": true, 00:18:19.989 "claim_type": "exclusive_write", 00:18:19.989 "zoned": false, 00:18:19.989 "supported_io_types": { 00:18:19.989 "read": true, 00:18:19.989 "write": true, 00:18:19.989 "unmap": true, 00:18:19.989 "write_zeroes": true, 00:18:19.989 "flush": true, 00:18:19.989 "reset": true, 00:18:19.989 
"compare": false, 00:18:19.989 "compare_and_write": false, 00:18:19.989 "abort": true, 00:18:19.989 "nvme_admin": false, 00:18:19.989 "nvme_io": false 00:18:19.989 }, 00:18:19.989 "memory_domains": [ 00:18:19.989 { 00:18:19.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.989 "dma_device_type": 2 00:18:19.989 } 00:18:19.989 ], 00:18:19.989 "driver_specific": {} 00:18:19.989 } 00:18:19.989 ] 00:18:19.989 12:38:02 -- common/autotest_common.sh@895 -- # return 0 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.989 12:38:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.248 12:38:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.248 "name": "Existed_Raid", 00:18:20.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.248 "strip_size_kb": 64, 00:18:20.248 "state": "configuring", 00:18:20.248 "raid_level": "concat", 00:18:20.248 "superblock": false, 00:18:20.248 "num_base_bdevs": 2, 00:18:20.248 "num_base_bdevs_discovered": 1, 00:18:20.248 "num_base_bdevs_operational": 2, 00:18:20.248 "base_bdevs_list": [ 00:18:20.248 { 00:18:20.248 "name": "BaseBdev1", 00:18:20.248 "uuid": "d0fd7e73-73d4-45cb-88bf-5e0a334d7f65", 00:18:20.248 "is_configured": true, 00:18:20.248 "data_offset": 0, 00:18:20.248 "data_size": 65536 00:18:20.248 }, 00:18:20.248 { 00:18:20.248 "name": "BaseBdev2", 00:18:20.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.248 "is_configured": false, 00:18:20.248 "data_offset": 0, 00:18:20.248 "data_size": 0 00:18:20.248 } 00:18:20.248 ] 00:18:20.248 }' 00:18:20.248 12:38:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.248 12:38:02 -- common/autotest_common.sh@10 -- # set +x 00:18:20.817 12:38:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:20.817 [2024-10-01 12:38:03.210807] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.817 [2024-10-01 12:38:03.210876] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:20.817 12:38:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:20.817 12:38:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:21.075 [2024-10-01 12:38:03.390576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.075 [2024-10-01 12:38:03.392487] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:18:21.075 [2024-10-01 12:38:03.392541] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.075 "name": "Existed_Raid", 00:18:21.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.075 "strip_size_kb": 64, 00:18:21.075 "state": "configuring", 00:18:21.075 "raid_level": "concat", 00:18:21.075 "superblock": false, 00:18:21.075 "num_base_bdevs": 2, 00:18:21.075 "num_base_bdevs_discovered": 1, 00:18:21.075 "num_base_bdevs_operational": 2, 00:18:21.075 "base_bdevs_list": [ 00:18:21.075 { 00:18:21.075 "name": "BaseBdev1", 00:18:21.075 "uuid": "d0fd7e73-73d4-45cb-88bf-5e0a334d7f65", 00:18:21.075 "is_configured": true, 00:18:21.075 "data_offset": 0, 00:18:21.075 "data_size": 65536 00:18:21.075 }, 00:18:21.075 { 00:18:21.075 "name": "BaseBdev2", 00:18:21.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.075 "is_configured": false, 00:18:21.075 "data_offset": 0, 00:18:21.075 "data_size": 0 00:18:21.075 } 00:18:21.075 ] 00:18:21.075 }' 00:18:21.075 12:38:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.075 12:38:03 -- common/autotest_common.sh@10 -- # set +x 00:18:21.644 12:38:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:21.903 [2024-10-01 12:38:04.321689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.903 [2024-10-01 12:38:04.321734] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:21.903 [2024-10-01 12:38:04.321741] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:21.903 [2024-10-01 12:38:04.321847] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:18:21.903 [2024-10-01 12:38:04.322127] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:21.903 [2024-10-01 12:38:04.322137] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:21.903 [2024-10-01 12:38:04.322407] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.903 BaseBdev2 00:18:21.903 12:38:04 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:18:21.903 12:38:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:21.903 12:38:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:21.903 12:38:04 -- common/autotest_common.sh@889 -- # local i 00:18:21.903 12:38:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:21.903 12:38:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:21.903 12:38:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.162 12:38:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:22.162 [ 00:18:22.162 { 00:18:22.162 "name": "BaseBdev2", 00:18:22.162 "aliases": [ 00:18:22.162 "928274d2-c2a3-4926-a428-a883154c0e74" 00:18:22.162 ], 00:18:22.162 "product_name": "Malloc disk", 00:18:22.162 "block_size": 512, 00:18:22.162 "num_blocks": 65536, 00:18:22.162 "uuid": "928274d2-c2a3-4926-a428-a883154c0e74", 00:18:22.162 "assigned_rate_limits": { 00:18:22.162 "rw_ios_per_sec": 0, 00:18:22.162 "rw_mbytes_per_sec": 0, 00:18:22.162 "r_mbytes_per_sec": 0, 00:18:22.162 "w_mbytes_per_sec": 0 00:18:22.162 }, 00:18:22.162 "claimed": true, 00:18:22.162 "claim_type": "exclusive_write", 00:18:22.162 "zoned": false, 00:18:22.162 "supported_io_types": { 00:18:22.162 "read": true, 00:18:22.162 "write": true, 00:18:22.162 "unmap": true, 00:18:22.162 "write_zeroes": true, 00:18:22.162 "flush": true, 00:18:22.162 "reset": true, 00:18:22.162 "compare": false, 00:18:22.162 "compare_and_write": false, 00:18:22.162 "abort": true, 00:18:22.162 "nvme_admin": false, 00:18:22.162 "nvme_io": false 00:18:22.162 }, 00:18:22.162 "memory_domains": [ 00:18:22.162 { 00:18:22.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.162 "dma_device_type": 2 00:18:22.162 } 00:18:22.162 ], 00:18:22.162 "driver_specific": {} 00:18:22.162 } 00:18:22.162 ] 00:18:22.162 12:38:04 -- common/autotest_common.sh@895 -- # return 0 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.162 12:38:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.421 12:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.421 "name": "Existed_Raid", 00:18:22.421 "uuid": "f90f4df4-da64-45b2-a086-571426b25c24", 00:18:22.421 "strip_size_kb": 64, 00:18:22.421 "state": "online", 00:18:22.421 "raid_level": "concat", 00:18:22.421 "superblock": false, 
00:18:22.421 "num_base_bdevs": 2, 00:18:22.421 "num_base_bdevs_discovered": 2, 00:18:22.421 "num_base_bdevs_operational": 2, 00:18:22.421 "base_bdevs_list": [ 00:18:22.421 { 00:18:22.421 "name": "BaseBdev1", 00:18:22.421 "uuid": "d0fd7e73-73d4-45cb-88bf-5e0a334d7f65", 00:18:22.421 "is_configured": true, 00:18:22.421 "data_offset": 0, 00:18:22.421 "data_size": 65536 00:18:22.421 }, 00:18:22.421 { 00:18:22.421 "name": "BaseBdev2", 00:18:22.421 "uuid": "928274d2-c2a3-4926-a428-a883154c0e74", 00:18:22.421 "is_configured": true, 00:18:22.421 "data_offset": 0, 00:18:22.421 "data_size": 65536 00:18:22.421 } 00:18:22.421 ] 00:18:22.421 }' 00:18:22.421 12:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.421 12:38:04 -- common/autotest_common.sh@10 -- # set +x 00:18:22.989 12:38:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:22.989 [2024-10-01 12:38:05.504081] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.989 [2024-10-01 12:38:05.504111] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.989 [2024-10-01 12:38:05.504183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.247 12:38:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.506 12:38:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.506 "name": "Existed_Raid", 00:18:23.506 "uuid": "f90f4df4-da64-45b2-a086-571426b25c24", 00:18:23.506 "strip_size_kb": 64, 00:18:23.506 "state": "offline", 00:18:23.506 "raid_level": "concat", 00:18:23.506 "superblock": false, 00:18:23.506 "num_base_bdevs": 2, 00:18:23.506 "num_base_bdevs_discovered": 1, 00:18:23.506 "num_base_bdevs_operational": 1, 00:18:23.506 "base_bdevs_list": [ 00:18:23.506 { 00:18:23.506 "name": null, 00:18:23.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.506 "is_configured": false, 00:18:23.506 "data_offset": 0, 00:18:23.506 "data_size": 65536 00:18:23.506 }, 00:18:23.506 { 00:18:23.506 "name": "BaseBdev2", 00:18:23.506 "uuid": "928274d2-c2a3-4926-a428-a883154c0e74", 00:18:23.506 "is_configured": true, 00:18:23.506 "data_offset": 0, 00:18:23.506 
"data_size": 65536 00:18:23.506 } 00:18:23.506 ] 00:18:23.506 }' 00:18:23.506 12:38:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.506 12:38:05 -- common/autotest_common.sh@10 -- # set +x 00:18:24.072 12:38:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:24.072 12:38:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:24.072 12:38:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.072 12:38:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:24.072 12:38:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:24.072 12:38:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:24.072 12:38:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:24.332 [2024-10-01 12:38:06.652678] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:24.332 [2024-10-01 12:38:06.652753] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:24.332 12:38:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:24.332 12:38:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:24.332 12:38:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.332 12:38:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:24.591 12:38:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:24.591 12:38:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:24.591 12:38:06 -- bdev/bdev_raid.sh@287 -- # killprocess 113801 00:18:24.591 12:38:06 -- common/autotest_common.sh@926 -- # '[' -z 113801 ']' 00:18:24.591 12:38:06 -- common/autotest_common.sh@930 -- # kill -0 113801 00:18:24.591 12:38:06 -- common/autotest_common.sh@931 -- # uname 00:18:24.591 12:38:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:24.591 12:38:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113801 00:18:24.591 12:38:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:24.591 12:38:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:24.591 12:38:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113801' 00:18:24.591 killing process with pid 113801 00:18:24.591 12:38:06 -- common/autotest_common.sh@945 -- # kill 113801 00:18:24.591 [2024-10-01 12:38:06.967623] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.591 [2024-10-01 12:38:06.967735] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.591 12:38:06 -- common/autotest_common.sh@950 -- # wait 113801 00:18:25.560 12:38:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:25.560 00:18:25.560 real 0m8.386s 00:18:25.560 user 0m13.964s 00:18:25.560 sys 0m1.316s 00:18:25.560 12:38:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.560 12:38:08 -- common/autotest_common.sh@10 -- # set +x 00:18:25.560 ************************************ 00:18:25.560 END TEST raid_state_function_test 00:18:25.560 ************************************ 00:18:25.560 12:38:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:18:25.560 12:38:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:25.560 12:38:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.560 12:38:08 -- common/autotest_common.sh@10 -- # 
set +x 00:18:25.830 ************************************ 00:18:25.830 START TEST raid_state_function_test_sb 00:18:25.830 ************************************ 00:18:25.830 12:38:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=114096 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114096' 00:18:25.830 Process raid pid: 114096 00:18:25.830 12:38:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114096 /var/tmp/spdk-raid.sock 00:18:25.830 12:38:08 -- common/autotest_common.sh@819 -- # '[' -z 114096 ']' 00:18:25.830 12:38:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:25.830 12:38:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:25.830 12:38:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:25.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:25.830 12:38:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:25.830 12:38:08 -- common/autotest_common.sh@10 -- # set +x 00:18:25.830 [2024-10-01 12:38:08.181007] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:18:25.830 [2024-10-01 12:38:08.181141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.830 [2024-10-01 12:38:08.348005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.109 [2024-10-01 12:38:08.501934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.395 [2024-10-01 12:38:08.657562] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.654 12:38:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:26.654 12:38:08 -- common/autotest_common.sh@852 -- # return 0 00:18:26.654 12:38:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:26.654 [2024-10-01 12:38:09.169796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.654 [2024-10-01 12:38:09.169866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.654 [2024-10-01 12:38:09.169876] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.654 [2024-10-01 12:38:09.169896] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.912 "name": "Existed_Raid", 00:18:26.912 "uuid": "2b5a2230-c7a0-4042-9222-eeb7cc371863", 00:18:26.912 "strip_size_kb": 64, 00:18:26.912 "state": "configuring", 00:18:26.912 "raid_level": "concat", 00:18:26.912 "superblock": true, 00:18:26.912 "num_base_bdevs": 2, 00:18:26.912 "num_base_bdevs_discovered": 0, 00:18:26.912 "num_base_bdevs_operational": 2, 00:18:26.912 "base_bdevs_list": [ 00:18:26.912 { 00:18:26.912 "name": "BaseBdev1", 00:18:26.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.912 "is_configured": false, 00:18:26.912 "data_offset": 0, 00:18:26.912 "data_size": 0 00:18:26.912 }, 00:18:26.912 { 00:18:26.912 "name": "BaseBdev2", 00:18:26.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.912 "is_configured": false, 00:18:26.912 "data_offset": 0, 00:18:26.912 "data_size": 0 00:18:26.912 } 00:18:26.912 ] 00:18:26.912 }' 00:18:26.912 12:38:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.912 12:38:09 -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.480 12:38:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:27.738 [2024-10-01 12:38:10.096336] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.738 [2024-10-01 12:38:10.096384] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:27.738 12:38:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:27.997 [2024-10-01 12:38:10.284102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.997 [2024-10-01 12:38:10.284188] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.997 [2024-10-01 12:38:10.284198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.997 [2024-10-01 12:38:10.284219] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.997 12:38:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:27.997 [2024-10-01 12:38:10.475390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.997 BaseBdev1 00:18:27.997 12:38:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:27.997 12:38:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:27.997 12:38:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:27.997 12:38:10 -- common/autotest_common.sh@889 -- # local i 00:18:27.997 12:38:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:27.997 12:38:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:27.997 12:38:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.255 12:38:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.515 [ 00:18:28.515 { 00:18:28.515 "name": "BaseBdev1", 00:18:28.515 "aliases": [ 00:18:28.515 "8cf057e9-b1e0-4486-a5d5-ab20e1e79712" 00:18:28.515 ], 00:18:28.515 "product_name": "Malloc disk", 00:18:28.515 "block_size": 512, 00:18:28.515 "num_blocks": 65536, 00:18:28.515 "uuid": "8cf057e9-b1e0-4486-a5d5-ab20e1e79712", 00:18:28.515 "assigned_rate_limits": { 00:18:28.515 "rw_ios_per_sec": 0, 00:18:28.515 "rw_mbytes_per_sec": 0, 00:18:28.515 "r_mbytes_per_sec": 0, 00:18:28.515 "w_mbytes_per_sec": 0 00:18:28.515 }, 00:18:28.515 "claimed": true, 00:18:28.515 "claim_type": "exclusive_write", 00:18:28.515 "zoned": false, 00:18:28.515 "supported_io_types": { 00:18:28.515 "read": true, 00:18:28.515 "write": true, 00:18:28.515 "unmap": true, 00:18:28.515 "write_zeroes": true, 00:18:28.515 "flush": true, 00:18:28.515 "reset": true, 00:18:28.515 "compare": false, 00:18:28.515 "compare_and_write": false, 00:18:28.515 "abort": true, 00:18:28.515 "nvme_admin": false, 00:18:28.515 "nvme_io": false 00:18:28.515 }, 00:18:28.515 "memory_domains": [ 00:18:28.515 { 00:18:28.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.515 "dma_device_type": 2 00:18:28.515 } 00:18:28.515 ], 00:18:28.515 "driver_specific": {} 00:18:28.515 } 00:18:28.515 ] 00:18:28.515 
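The JSON dumps above close the first cycle of raid_state_function_test_sb: the concat array is declared before its members exist, so it parks in "configuring"; creating the first malloc base bdev lets the raid claim it ("claimed": true with claim_type "exclusive_write"), and num_base_bdevs_discovered ticks from 0 to 1. Reduced to its essential RPC calls (same $RPC shorthand as above):

  # Declare a 2-disk concat raid, 64 KiB strip, with on-disk superblock (-s).
  # Neither base bdev exists yet, so the array stays in "configuring".
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # A 32 MiB malloc bdev (512-byte blocks); the raid claims it on examine.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  # waitforbdev in essence: let examine finish, then query with a 2 s timeout.
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b BaseBdev1 -t 2000
  # The state assertion the suite makes with jq:
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  # expected: configuring (1 of 2 base bdevs discovered)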
12:38:10 -- common/autotest_common.sh@895 -- # return 0 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.515 12:38:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.773 12:38:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.773 "name": "Existed_Raid", 00:18:28.773 "uuid": "ca093ddd-be93-452d-90aa-1932b04474b7", 00:18:28.773 "strip_size_kb": 64, 00:18:28.773 "state": "configuring", 00:18:28.773 "raid_level": "concat", 00:18:28.773 "superblock": true, 00:18:28.773 "num_base_bdevs": 2, 00:18:28.773 "num_base_bdevs_discovered": 1, 00:18:28.773 "num_base_bdevs_operational": 2, 00:18:28.774 "base_bdevs_list": [ 00:18:28.774 { 00:18:28.774 "name": "BaseBdev1", 00:18:28.774 "uuid": "8cf057e9-b1e0-4486-a5d5-ab20e1e79712", 00:18:28.774 "is_configured": true, 00:18:28.774 "data_offset": 2048, 00:18:28.774 "data_size": 63488 00:18:28.774 }, 00:18:28.774 { 00:18:28.774 "name": "BaseBdev2", 00:18:28.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.774 "is_configured": false, 00:18:28.774 "data_offset": 0, 00:18:28.774 "data_size": 0 00:18:28.774 } 00:18:28.774 ] 00:18:28.774 }' 00:18:28.774 12:38:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.774 12:38:11 -- common/autotest_common.sh@10 -- # set +x 00:18:29.032 12:38:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:29.291 [2024-10-01 12:38:11.729630] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:29.291 [2024-10-01 12:38:11.729677] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:29.291 12:38:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:29.291 12:38:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:29.550 12:38:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:29.809 BaseBdev1 00:18:29.809 12:38:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:29.809 12:38:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:29.809 12:38:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:29.809 12:38:12 -- common/autotest_common.sh@889 -- # local i 00:18:29.809 12:38:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:29.809 12:38:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:29.809 12:38:12 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:30.068 12:38:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:30.068 [ 00:18:30.068 { 00:18:30.068 "name": "BaseBdev1", 00:18:30.068 "aliases": [ 00:18:30.068 "0f1bc27e-e0cd-4ed1-88af-0db77ba261cd" 00:18:30.068 ], 00:18:30.068 "product_name": "Malloc disk", 00:18:30.068 "block_size": 512, 00:18:30.068 "num_blocks": 65536, 00:18:30.068 "uuid": "0f1bc27e-e0cd-4ed1-88af-0db77ba261cd", 00:18:30.068 "assigned_rate_limits": { 00:18:30.068 "rw_ios_per_sec": 0, 00:18:30.068 "rw_mbytes_per_sec": 0, 00:18:30.068 "r_mbytes_per_sec": 0, 00:18:30.068 "w_mbytes_per_sec": 0 00:18:30.068 }, 00:18:30.068 "claimed": false, 00:18:30.068 "zoned": false, 00:18:30.068 "supported_io_types": { 00:18:30.068 "read": true, 00:18:30.068 "write": true, 00:18:30.068 "unmap": true, 00:18:30.068 "write_zeroes": true, 00:18:30.068 "flush": true, 00:18:30.068 "reset": true, 00:18:30.068 "compare": false, 00:18:30.068 "compare_and_write": false, 00:18:30.068 "abort": true, 00:18:30.068 "nvme_admin": false, 00:18:30.068 "nvme_io": false 00:18:30.068 }, 00:18:30.068 "memory_domains": [ 00:18:30.068 { 00:18:30.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.068 "dma_device_type": 2 00:18:30.068 } 00:18:30.068 ], 00:18:30.068 "driver_specific": {} 00:18:30.068 } 00:18:30.068 ] 00:18:30.327 12:38:12 -- common/autotest_common.sh@895 -- # return 0 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:30.327 [2024-10-01 12:38:12.777568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.327 [2024-10-01 12:38:12.779414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.327 [2024-10-01 12:38:12.779497] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.327 12:38:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.584 12:38:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.584 "name": "Existed_Raid", 00:18:30.584 "uuid": "6f0df82c-c0ab-431a-8fdd-e3934a71ace6", 00:18:30.585 "strip_size_kb": 64, 00:18:30.585 "state": 
"configuring", 00:18:30.585 "raid_level": "concat", 00:18:30.585 "superblock": true, 00:18:30.585 "num_base_bdevs": 2, 00:18:30.585 "num_base_bdevs_discovered": 1, 00:18:30.585 "num_base_bdevs_operational": 2, 00:18:30.585 "base_bdevs_list": [ 00:18:30.585 { 00:18:30.585 "name": "BaseBdev1", 00:18:30.585 "uuid": "0f1bc27e-e0cd-4ed1-88af-0db77ba261cd", 00:18:30.585 "is_configured": true, 00:18:30.585 "data_offset": 2048, 00:18:30.585 "data_size": 63488 00:18:30.585 }, 00:18:30.585 { 00:18:30.585 "name": "BaseBdev2", 00:18:30.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.585 "is_configured": false, 00:18:30.585 "data_offset": 0, 00:18:30.585 "data_size": 0 00:18:30.585 } 00:18:30.585 ] 00:18:30.585 }' 00:18:30.585 12:38:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.585 12:38:12 -- common/autotest_common.sh@10 -- # set +x 00:18:31.151 12:38:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:31.411 [2024-10-01 12:38:13.719577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.411 [2024-10-01 12:38:13.719761] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:31.411 [2024-10-01 12:38:13.719773] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:31.411 [2024-10-01 12:38:13.719898] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:31.411 [2024-10-01 12:38:13.720193] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:31.411 [2024-10-01 12:38:13.720203] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:31.411 [2024-10-01 12:38:13.720343] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.411 BaseBdev2 00:18:31.411 12:38:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:31.411 12:38:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:31.411 12:38:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:31.411 12:38:13 -- common/autotest_common.sh@889 -- # local i 00:18:31.411 12:38:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:31.411 12:38:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:31.411 12:38:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.411 12:38:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:31.670 [ 00:18:31.670 { 00:18:31.670 "name": "BaseBdev2", 00:18:31.670 "aliases": [ 00:18:31.670 "47c95273-573d-4209-9f1c-034c498ef72e" 00:18:31.670 ], 00:18:31.670 "product_name": "Malloc disk", 00:18:31.670 "block_size": 512, 00:18:31.670 "num_blocks": 65536, 00:18:31.670 "uuid": "47c95273-573d-4209-9f1c-034c498ef72e", 00:18:31.670 "assigned_rate_limits": { 00:18:31.670 "rw_ios_per_sec": 0, 00:18:31.670 "rw_mbytes_per_sec": 0, 00:18:31.670 "r_mbytes_per_sec": 0, 00:18:31.670 "w_mbytes_per_sec": 0 00:18:31.670 }, 00:18:31.670 "claimed": true, 00:18:31.670 "claim_type": "exclusive_write", 00:18:31.670 "zoned": false, 00:18:31.670 "supported_io_types": { 00:18:31.670 "read": true, 00:18:31.670 "write": true, 00:18:31.670 "unmap": true, 00:18:31.670 "write_zeroes": true, 00:18:31.670 "flush": true, 00:18:31.670 
"reset": true, 00:18:31.670 "compare": false, 00:18:31.670 "compare_and_write": false, 00:18:31.670 "abort": true, 00:18:31.670 "nvme_admin": false, 00:18:31.670 "nvme_io": false 00:18:31.670 }, 00:18:31.670 "memory_domains": [ 00:18:31.670 { 00:18:31.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.670 "dma_device_type": 2 00:18:31.670 } 00:18:31.670 ], 00:18:31.670 "driver_specific": {} 00:18:31.670 } 00:18:31.670 ] 00:18:31.670 12:38:14 -- common/autotest_common.sh@895 -- # return 0 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.670 12:38:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.929 12:38:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.929 "name": "Existed_Raid", 00:18:31.929 "uuid": "6f0df82c-c0ab-431a-8fdd-e3934a71ace6", 00:18:31.929 "strip_size_kb": 64, 00:18:31.929 "state": "online", 00:18:31.929 "raid_level": "concat", 00:18:31.929 "superblock": true, 00:18:31.929 "num_base_bdevs": 2, 00:18:31.929 "num_base_bdevs_discovered": 2, 00:18:31.929 "num_base_bdevs_operational": 2, 00:18:31.929 "base_bdevs_list": [ 00:18:31.929 { 00:18:31.929 "name": "BaseBdev1", 00:18:31.929 "uuid": "0f1bc27e-e0cd-4ed1-88af-0db77ba261cd", 00:18:31.929 "is_configured": true, 00:18:31.929 "data_offset": 2048, 00:18:31.929 "data_size": 63488 00:18:31.929 }, 00:18:31.929 { 00:18:31.929 "name": "BaseBdev2", 00:18:31.929 "uuid": "47c95273-573d-4209-9f1c-034c498ef72e", 00:18:31.929 "is_configured": true, 00:18:31.929 "data_offset": 2048, 00:18:31.929 "data_size": 63488 00:18:31.929 } 00:18:31.929 ] 00:18:31.929 }' 00:18:31.929 12:38:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.929 12:38:14 -- common/autotest_common.sh@10 -- # set +x 00:18:32.497 12:38:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:32.497 [2024-10-01 12:38:15.006062] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.498 [2024-10-01 12:38:15.006090] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.498 [2024-10-01 12:38:15.006130] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:32.756 
12:38:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.756 12:38:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.015 12:38:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.015 "name": "Existed_Raid", 00:18:33.015 "uuid": "6f0df82c-c0ab-431a-8fdd-e3934a71ace6", 00:18:33.015 "strip_size_kb": 64, 00:18:33.015 "state": "offline", 00:18:33.015 "raid_level": "concat", 00:18:33.015 "superblock": true, 00:18:33.015 "num_base_bdevs": 2, 00:18:33.015 "num_base_bdevs_discovered": 1, 00:18:33.015 "num_base_bdevs_operational": 1, 00:18:33.015 "base_bdevs_list": [ 00:18:33.015 { 00:18:33.015 "name": null, 00:18:33.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.015 "is_configured": false, 00:18:33.015 "data_offset": 2048, 00:18:33.015 "data_size": 63488 00:18:33.015 }, 00:18:33.015 { 00:18:33.015 "name": "BaseBdev2", 00:18:33.015 "uuid": "47c95273-573d-4209-9f1c-034c498ef72e", 00:18:33.015 "is_configured": true, 00:18:33.015 "data_offset": 2048, 00:18:33.015 "data_size": 63488 00:18:33.015 } 00:18:33.015 ] 00:18:33.015 }' 00:18:33.015 12:38:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.015 12:38:15 -- common/autotest_common.sh@10 -- # set +x 00:18:33.581 12:38:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:33.581 12:38:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:33.581 12:38:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.581 12:38:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:33.581 12:38:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:33.581 12:38:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.581 12:38:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:33.839 [2024-10-01 12:38:16.199296] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:33.839 [2024-10-01 12:38:16.199358] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:33.839 12:38:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:33.839 12:38:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:33.839 12:38:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.839 12:38:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:34.102 12:38:16 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:18:34.102 12:38:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:34.102 12:38:16 -- bdev/bdev_raid.sh@287 -- # killprocess 114096 00:18:34.102 12:38:16 -- common/autotest_common.sh@926 -- # '[' -z 114096 ']' 00:18:34.102 12:38:16 -- common/autotest_common.sh@930 -- # kill -0 114096 00:18:34.102 12:38:16 -- common/autotest_common.sh@931 -- # uname 00:18:34.102 12:38:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:34.102 12:38:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114096 00:18:34.102 killing process with pid 114096 00:18:34.102 12:38:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:34.102 12:38:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:34.102 12:38:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114096' 00:18:34.102 12:38:16 -- common/autotest_common.sh@945 -- # kill 114096 00:18:34.102 [2024-10-01 12:38:16.517207] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.102 12:38:16 -- common/autotest_common.sh@950 -- # wait 114096 00:18:34.102 [2024-10-01 12:38:16.517325] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:35.480 00:18:35.480 real 0m9.467s 00:18:35.480 user 0m15.757s 00:18:35.480 sys 0m1.551s 00:18:35.480 12:38:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.480 12:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:35.480 ************************************ 00:18:35.480 END TEST raid_state_function_test_sb 00:18:35.480 ************************************ 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:18:35.480 12:38:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:35.480 12:38:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:35.480 12:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:35.480 ************************************ 00:18:35.480 START TEST raid_superblock_test 00:18:35.480 ************************************ 00:18:35.480 12:38:17 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@357 -- # raid_pid=114420 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:35.480 12:38:17 -- bdev/bdev_raid.sh@358 -- # waitforlisten 114420 /var/tmp/spdk-raid.sock 00:18:35.480 12:38:17 -- common/autotest_common.sh@819 -- # '[' -z 114420 ']' 00:18:35.480 12:38:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:35.480 12:38:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:35.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:35.480 12:38:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:35.480 12:38:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:35.480 12:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:35.480 [2024-10-01 12:38:17.731025] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:35.480 [2024-10-01 12:38:17.731175] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114420 ] 00:18:35.480 [2024-10-01 12:38:17.896155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.739 [2024-10-01 12:38:18.043823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.739 [2024-10-01 12:38:18.186701] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.307 12:38:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:36.307 12:38:18 -- common/autotest_common.sh@852 -- # return 0 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:36.307 malloc1 00:18:36.307 12:38:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:36.565 [2024-10-01 12:38:18.911543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:36.565 [2024-10-01 12:38:18.911626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.565 [2024-10-01 12:38:18.911651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:36.565 [2024-10-01 12:38:18.911696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.565 [2024-10-01 12:38:18.913844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.565 [2024-10-01 12:38:18.913897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:36.565 pt1 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.565 12:38:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:36.824 malloc2 00:18:36.824 12:38:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.824 [2024-10-01 12:38:19.334917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.824 [2024-10-01 12:38:19.334992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.824 [2024-10-01 12:38:19.335028] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:36.824 [2024-10-01 12:38:19.335074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.824 [2024-10-01 12:38:19.337210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.824 [2024-10-01 12:38:19.337253] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.824 pt2 00:18:36.824 12:38:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.824 12:38:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.824 12:38:19 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:18:37.082 [2024-10-01 12:38:19.530688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.082 [2024-10-01 12:38:19.532445] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.082 [2024-10-01 12:38:19.532604] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:18:37.082 [2024-10-01 12:38:19.532614] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:37.082 [2024-10-01 12:38:19.532702] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:37.082 [2024-10-01 12:38:19.533009] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:18:37.082 [2024-10-01 12:38:19.533027] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:18:37.082 [2024-10-01 12:38:19.533155] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.082 12:38:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.340 12:38:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.340 "name": "raid_bdev1", 00:18:37.340 "uuid": "86f8a63c-37ef-41d7-bf6f-d20ef6a98f91", 00:18:37.340 "strip_size_kb": 64, 00:18:37.340 "state": "online", 00:18:37.340 "raid_level": "concat", 00:18:37.340 "superblock": true, 00:18:37.340 "num_base_bdevs": 2, 00:18:37.340 "num_base_bdevs_discovered": 2, 00:18:37.340 "num_base_bdevs_operational": 2, 00:18:37.340 "base_bdevs_list": [ 00:18:37.340 { 00:18:37.340 "name": "pt1", 00:18:37.340 "uuid": "15677fa6-ee99-5d57-9dc8-3a9d4a7df96a", 00:18:37.340 "is_configured": true, 00:18:37.340 "data_offset": 2048, 00:18:37.340 "data_size": 63488 00:18:37.340 }, 00:18:37.340 { 00:18:37.340 "name": "pt2", 00:18:37.340 "uuid": "7fb78402-bc91-5790-9f66-a8906cfd3b73", 00:18:37.340 "is_configured": true, 00:18:37.340 "data_offset": 2048, 00:18:37.340 "data_size": 63488 00:18:37.340 } 00:18:37.340 ] 00:18:37.340 }' 00:18:37.340 12:38:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.340 12:38:19 -- common/autotest_common.sh@10 -- # set +x 00:18:37.906 12:38:20 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.906 12:38:20 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:37.906 [2024-10-01 12:38:20.385642] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.906 12:38:20 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=86f8a63c-37ef-41d7-bf6f-d20ef6a98f91 00:18:37.906 12:38:20 -- bdev/bdev_raid.sh@380 -- # '[' -z 86f8a63c-37ef-41d7-bf6f-d20ef6a98f91 ']' 00:18:37.906 12:38:20 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.163 [2024-10-01 12:38:20.577196] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.163 [2024-10-01 12:38:20.577214] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.163 [2024-10-01 12:38:20.577277] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.163 [2024-10-01 12:38:20.577324] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.163 [2024-10-01 12:38:20.577333] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:18:38.163 12:38:20 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.163 12:38:20 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:38.434 12:38:20 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:38.434 12:38:20 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:38.434 12:38:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.434 12:38:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:18:38.434 12:38:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.434 12:38:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:38.730 12:38:21 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:38.730 12:38:21 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:38.988 12:38:21 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:38.988 12:38:21 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:18:38.988 12:38:21 -- common/autotest_common.sh@640 -- # local es=0 00:18:38.988 12:38:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:18:38.988 12:38:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.988 12:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.988 12:38:21 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.988 12:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.988 12:38:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.988 12:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.988 12:38:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.988 12:38:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:38.988 12:38:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:18:38.988 [2024-10-01 12:38:21.471958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:38.988 [2024-10-01 12:38:21.473825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:38.988 [2024-10-01 12:38:21.473887] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:38.988 [2024-10-01 12:38:21.473942] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:38.988 [2024-10-01 12:38:21.473969] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.988 [2024-10-01 12:38:21.473978] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:18:38.988 request: 00:18:38.988 { 00:18:38.988 "name": "raid_bdev1", 00:18:38.988 "raid_level": "concat", 00:18:38.988 "base_bdevs": [ 00:18:38.988 "malloc1", 00:18:38.988 "malloc2" 00:18:38.988 ], 00:18:38.988 "superblock": false, 00:18:38.988 "strip_size_kb": 64, 00:18:38.988 "method": "bdev_raid_create", 00:18:38.988 "req_id": 1 00:18:38.988 } 00:18:38.988 Got JSON-RPC error response 00:18:38.988 response: 00:18:38.988 { 00:18:38.988 "code": -17, 00:18:38.988 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:38.988 } 00:18:38.988 12:38:21 -- common/autotest_common.sh@643 -- # es=1 00:18:38.988 12:38:21 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:38.988 12:38:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:38.988 12:38:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:38.988 12:38:21 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.988 12:38:21 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:39.246 12:38:21 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:39.246 12:38:21 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:39.246 12:38:21 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.505 [2024-10-01 12:38:21.855343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.505 [2024-10-01 12:38:21.855436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.505 [2024-10-01 12:38:21.855467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:18:39.505 [2024-10-01 12:38:21.855490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.505 [2024-10-01 12:38:21.857639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.505 [2024-10-01 12:38:21.857691] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.505 [2024-10-01 12:38:21.857795] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:39.505 [2024-10-01 12:38:21.857854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.505 pt1 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.505 12:38:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.764 12:38:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:39.764 "name": "raid_bdev1", 00:18:39.764 "uuid": "86f8a63c-37ef-41d7-bf6f-d20ef6a98f91", 00:18:39.764 "strip_size_kb": 64, 00:18:39.764 "state": "configuring", 00:18:39.764 "raid_level": "concat", 00:18:39.764 "superblock": true, 00:18:39.764 "num_base_bdevs": 2, 00:18:39.764 "num_base_bdevs_discovered": 1, 00:18:39.764 "num_base_bdevs_operational": 2, 00:18:39.764 "base_bdevs_list": [ 00:18:39.764 { 00:18:39.764 "name": "pt1", 00:18:39.764 "uuid": "15677fa6-ee99-5d57-9dc8-3a9d4a7df96a", 00:18:39.764 "is_configured": true, 00:18:39.764 "data_offset": 2048, 00:18:39.764 "data_size": 63488 00:18:39.764 }, 00:18:39.764 { 00:18:39.764 "name": null, 00:18:39.764 "uuid": 
"7fb78402-bc91-5790-9f66-a8906cfd3b73", 00:18:39.764 "is_configured": false, 00:18:39.764 "data_offset": 2048, 00:18:39.764 "data_size": 63488 00:18:39.764 } 00:18:39.764 ] 00:18:39.764 }' 00:18:39.764 12:38:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:39.764 12:38:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.022 12:38:22 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:18:40.022 12:38:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:40.022 12:38:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:40.022 12:38:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.280 [2024-10-01 12:38:22.722358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.280 [2024-10-01 12:38:22.722433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.280 [2024-10-01 12:38:22.722470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:40.280 [2024-10-01 12:38:22.722493] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.280 [2024-10-01 12:38:22.722890] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.280 [2024-10-01 12:38:22.722925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.280 [2024-10-01 12:38:22.723010] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:40.280 [2024-10-01 12:38:22.723028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.280 [2024-10-01 12:38:22.723113] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:18:40.280 [2024-10-01 12:38:22.723121] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:40.280 [2024-10-01 12:38:22.723212] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:40.280 [2024-10-01 12:38:22.723462] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:18:40.280 [2024-10-01 12:38:22.723471] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:18:40.280 [2024-10-01 12:38:22.723584] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.280 pt2 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.280 12:38:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.538 12:38:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.539 "name": "raid_bdev1", 00:18:40.539 "uuid": "86f8a63c-37ef-41d7-bf6f-d20ef6a98f91", 00:18:40.539 "strip_size_kb": 64, 00:18:40.539 "state": "online", 00:18:40.539 "raid_level": "concat", 00:18:40.539 "superblock": true, 00:18:40.539 "num_base_bdevs": 2, 00:18:40.539 "num_base_bdevs_discovered": 2, 00:18:40.539 "num_base_bdevs_operational": 2, 00:18:40.539 "base_bdevs_list": [ 00:18:40.539 { 00:18:40.539 "name": "pt1", 00:18:40.539 "uuid": "15677fa6-ee99-5d57-9dc8-3a9d4a7df96a", 00:18:40.539 "is_configured": true, 00:18:40.539 "data_offset": 2048, 00:18:40.539 "data_size": 63488 00:18:40.539 }, 00:18:40.539 { 00:18:40.539 "name": "pt2", 00:18:40.539 "uuid": "7fb78402-bc91-5790-9f66-a8906cfd3b73", 00:18:40.539 "is_configured": true, 00:18:40.539 "data_offset": 2048, 00:18:40.539 "data_size": 63488 00:18:40.539 } 00:18:40.539 ] 00:18:40.539 }' 00:18:40.539 12:38:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.539 12:38:22 -- common/autotest_common.sh@10 -- # set +x 00:18:41.106 12:38:23 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:41.106 12:38:23 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:41.106 [2024-10-01 12:38:23.565316] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.106 12:38:23 -- bdev/bdev_raid.sh@430 -- # '[' 86f8a63c-37ef-41d7-bf6f-d20ef6a98f91 '!=' 86f8a63c-37ef-41d7-bf6f-d20ef6a98f91 ']' 00:18:41.106 12:38:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:41.106 12:38:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:41.106 12:38:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:41.106 12:38:23 -- bdev/bdev_raid.sh@511 -- # killprocess 114420 00:18:41.106 12:38:23 -- common/autotest_common.sh@926 -- # '[' -z 114420 ']' 00:18:41.106 12:38:23 -- common/autotest_common.sh@930 -- # kill -0 114420 00:18:41.106 12:38:23 -- common/autotest_common.sh@931 -- # uname 00:18:41.106 12:38:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:41.106 12:38:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114420 00:18:41.106 12:38:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:41.106 killing process with pid 114420 00:18:41.106 12:38:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:41.106 12:38:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114420' 00:18:41.106 12:38:23 -- common/autotest_common.sh@945 -- # kill 114420 00:18:41.106 [2024-10-01 12:38:23.606102] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.106 [2024-10-01 12:38:23.606161] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.106 [2024-10-01 12:38:23.606201] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.106 [2024-10-01 12:38:23.606209] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:18:41.106 12:38:23 -- common/autotest_common.sh@950 -- # wait 114420 00:18:41.365 [2024-10-01 12:38:23.763163] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.300 ************************************ 00:18:42.300 END TEST raid_superblock_test 00:18:42.300 
************************************ 00:18:42.300 12:38:24 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:42.300 00:18:42.300 real 0m7.146s 00:18:42.300 user 0m11.535s 00:18:42.300 sys 0m1.217s 00:18:42.300 12:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.300 12:38:24 -- common/autotest_common.sh@10 -- # set +x 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:18:42.560 12:38:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:42.560 12:38:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:42.560 12:38:24 -- common/autotest_common.sh@10 -- # set +x 00:18:42.560 ************************************ 00:18:42.560 START TEST raid_state_function_test 00:18:42.560 ************************************ 00:18:42.560 12:38:24 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=114654 00:18:42.560 Process raid pid: 114654 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114654' 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:42.560 12:38:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114654 /var/tmp/spdk-raid.sock 00:18:42.560 12:38:24 -- common/autotest_common.sh@819 -- # '[' -z 114654 ']' 00:18:42.560 12:38:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:42.560 12:38:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
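Before the raid1 run starting above gets under way, the raid_superblock_test that just finished is worth condensing: passthru bdevs with fixed UUIDs are layered over malloc disks and assembled into a superblock raid, and after the stack is torn down the superblock left behind on the malloc disks makes a conflicting re-create fail with JSON-RPC error -17 ("File exists"). A hedged sketch of that sequence, same $RPC shorthand as earlier:

  # Base devices: passthru-over-malloc with deterministic passthru UUIDs.
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
  # Tear down the raid and the passthru layer; the superblock written
  # through pt1/pt2 persists on malloc1/malloc2.
  $RPC bdev_raid_delete raid_bdev1
  $RPC bdev_passthru_delete pt1
  $RPC bdev_passthru_delete pt2
  # A conflicting array straight over the malloc disks is rejected:
  $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 \
      || echo 'rejected as expected: -17, File exists'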
00:18:42.560 12:38:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:42.560 12:38:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.560 12:38:24 -- common/autotest_common.sh@10 -- # set +x 00:18:42.560 [2024-10-01 12:38:24.961696] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:42.560 [2024-10-01 12:38:24.961840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.819 [2024-10-01 12:38:25.125201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.819 [2024-10-01 12:38:25.269677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.078 [2024-10-01 12:38:25.422576] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.337 12:38:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.337 12:38:25 -- common/autotest_common.sh@852 -- # return 0 00:18:43.337 12:38:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:43.595 [2024-10-01 12:38:25.922965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.595 [2024-10-01 12:38:25.923033] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.595 [2024-10-01 12:38:25.923042] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.595 [2024-10-01 12:38:25.923075] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.595 12:38:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:43.595 12:38:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.595 12:38:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.595 12:38:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:43.595 12:38:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:43.595 12:38:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:43.595 12:38:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.596 12:38:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.596 12:38:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.596 12:38:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.596 12:38:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.596 12:38:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.596 12:38:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.596 "name": "Existed_Raid", 00:18:43.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.596 "strip_size_kb": 0, 00:18:43.596 "state": "configuring", 00:18:43.596 "raid_level": "raid1", 00:18:43.596 "superblock": false, 00:18:43.596 "num_base_bdevs": 2, 00:18:43.596 "num_base_bdevs_discovered": 0, 00:18:43.596 "num_base_bdevs_operational": 2, 00:18:43.596 "base_bdevs_list": [ 00:18:43.596 { 00:18:43.596 "name": "BaseBdev1", 00:18:43.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.596 "is_configured": false, 00:18:43.596 
"data_offset": 0, 00:18:43.596 "data_size": 0 00:18:43.596 }, 00:18:43.596 { 00:18:43.596 "name": "BaseBdev2", 00:18:43.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.596 "is_configured": false, 00:18:43.596 "data_offset": 0, 00:18:43.596 "data_size": 0 00:18:43.596 } 00:18:43.596 ] 00:18:43.596 }' 00:18:43.596 12:38:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.596 12:38:26 -- common/autotest_common.sh@10 -- # set +x 00:18:44.162 12:38:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.422 [2024-10-01 12:38:26.773706] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.422 [2024-10-01 12:38:26.773746] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:44.422 12:38:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:44.422 [2024-10-01 12:38:26.929470] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.422 [2024-10-01 12:38:26.929538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.422 [2024-10-01 12:38:26.929546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.422 [2024-10-01 12:38:26.929583] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.422 12:38:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:44.681 [2024-10-01 12:38:27.111729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.681 BaseBdev1 00:18:44.681 12:38:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:44.681 12:38:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:44.681 12:38:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:44.681 12:38:27 -- common/autotest_common.sh@889 -- # local i 00:18:44.681 12:38:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:44.681 12:38:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:44.681 12:38:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:44.940 12:38:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:44.940 [ 00:18:44.940 { 00:18:44.940 "name": "BaseBdev1", 00:18:44.940 "aliases": [ 00:18:44.940 "a7d491a3-77f1-4b7b-9ccf-14314a9a2dab" 00:18:44.940 ], 00:18:44.940 "product_name": "Malloc disk", 00:18:44.940 "block_size": 512, 00:18:44.940 "num_blocks": 65536, 00:18:44.940 "uuid": "a7d491a3-77f1-4b7b-9ccf-14314a9a2dab", 00:18:44.940 "assigned_rate_limits": { 00:18:44.940 "rw_ios_per_sec": 0, 00:18:44.940 "rw_mbytes_per_sec": 0, 00:18:44.940 "r_mbytes_per_sec": 0, 00:18:44.940 "w_mbytes_per_sec": 0 00:18:44.940 }, 00:18:44.940 "claimed": true, 00:18:44.940 "claim_type": "exclusive_write", 00:18:44.940 "zoned": false, 00:18:44.940 "supported_io_types": { 00:18:44.940 "read": true, 00:18:44.940 "write": true, 00:18:44.940 "unmap": true, 00:18:44.940 "write_zeroes": true, 00:18:44.940 "flush": true, 00:18:44.940 "reset": true, 00:18:44.940 "compare": false, 
00:18:44.940 "compare_and_write": false, 00:18:44.940 "abort": true, 00:18:44.940 "nvme_admin": false, 00:18:44.940 "nvme_io": false 00:18:44.940 }, 00:18:44.940 "memory_domains": [ 00:18:44.940 { 00:18:44.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.940 "dma_device_type": 2 00:18:44.940 } 00:18:44.940 ], 00:18:44.940 "driver_specific": {} 00:18:44.940 } 00:18:44.940 ] 00:18:45.199 12:38:27 -- common/autotest_common.sh@895 -- # return 0 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.199 "name": "Existed_Raid", 00:18:45.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.199 "strip_size_kb": 0, 00:18:45.199 "state": "configuring", 00:18:45.199 "raid_level": "raid1", 00:18:45.199 "superblock": false, 00:18:45.199 "num_base_bdevs": 2, 00:18:45.199 "num_base_bdevs_discovered": 1, 00:18:45.199 "num_base_bdevs_operational": 2, 00:18:45.199 "base_bdevs_list": [ 00:18:45.199 { 00:18:45.199 "name": "BaseBdev1", 00:18:45.199 "uuid": "a7d491a3-77f1-4b7b-9ccf-14314a9a2dab", 00:18:45.199 "is_configured": true, 00:18:45.199 "data_offset": 0, 00:18:45.199 "data_size": 65536 00:18:45.199 }, 00:18:45.199 { 00:18:45.199 "name": "BaseBdev2", 00:18:45.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.199 "is_configured": false, 00:18:45.199 "data_offset": 0, 00:18:45.199 "data_size": 0 00:18:45.199 } 00:18:45.199 ] 00:18:45.199 }' 00:18:45.199 12:38:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.199 12:38:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.767 12:38:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:46.026 [2024-10-01 12:38:28.322642] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:46.026 [2024-10-01 12:38:28.322688] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:46.026 [2024-10-01 12:38:28.498395] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.026 [2024-10-01 12:38:28.500322] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.026 [2024-10-01 
12:38:28.500390] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.026 12:38:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.285 12:38:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.285 "name": "Existed_Raid", 00:18:46.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.285 "strip_size_kb": 0, 00:18:46.285 "state": "configuring", 00:18:46.285 "raid_level": "raid1", 00:18:46.285 "superblock": false, 00:18:46.285 "num_base_bdevs": 2, 00:18:46.285 "num_base_bdevs_discovered": 1, 00:18:46.285 "num_base_bdevs_operational": 2, 00:18:46.285 "base_bdevs_list": [ 00:18:46.285 { 00:18:46.285 "name": "BaseBdev1", 00:18:46.285 "uuid": "a7d491a3-77f1-4b7b-9ccf-14314a9a2dab", 00:18:46.285 "is_configured": true, 00:18:46.285 "data_offset": 0, 00:18:46.285 "data_size": 65536 00:18:46.285 }, 00:18:46.285 { 00:18:46.285 "name": "BaseBdev2", 00:18:46.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.285 "is_configured": false, 00:18:46.285 "data_offset": 0, 00:18:46.285 "data_size": 0 00:18:46.285 } 00:18:46.285 ] 00:18:46.285 }' 00:18:46.285 12:38:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.285 12:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:46.852 12:38:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:47.111 [2024-10-01 12:38:29.415831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.111 [2024-10-01 12:38:29.415892] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:47.111 [2024-10-01 12:38:29.415901] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:47.111 [2024-10-01 12:38:29.416007] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:18:47.111 [2024-10-01 12:38:29.416285] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:47.111 [2024-10-01 12:38:29.416294] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:47.111 [2024-10-01 12:38:29.416549] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.111 BaseBdev2 00:18:47.111 12:38:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:47.111 
12:38:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:47.111 12:38:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:47.111 12:38:29 -- common/autotest_common.sh@889 -- # local i 00:18:47.111 12:38:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:47.111 12:38:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:47.111 12:38:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.111 12:38:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:47.370 [ 00:18:47.370 { 00:18:47.370 "name": "BaseBdev2", 00:18:47.370 "aliases": [ 00:18:47.370 "fe029c8b-46d7-4c09-8744-a2f0380845b7" 00:18:47.370 ], 00:18:47.370 "product_name": "Malloc disk", 00:18:47.370 "block_size": 512, 00:18:47.370 "num_blocks": 65536, 00:18:47.370 "uuid": "fe029c8b-46d7-4c09-8744-a2f0380845b7", 00:18:47.370 "assigned_rate_limits": { 00:18:47.370 "rw_ios_per_sec": 0, 00:18:47.370 "rw_mbytes_per_sec": 0, 00:18:47.370 "r_mbytes_per_sec": 0, 00:18:47.370 "w_mbytes_per_sec": 0 00:18:47.370 }, 00:18:47.370 "claimed": true, 00:18:47.370 "claim_type": "exclusive_write", 00:18:47.370 "zoned": false, 00:18:47.370 "supported_io_types": { 00:18:47.370 "read": true, 00:18:47.370 "write": true, 00:18:47.370 "unmap": true, 00:18:47.370 "write_zeroes": true, 00:18:47.370 "flush": true, 00:18:47.370 "reset": true, 00:18:47.370 "compare": false, 00:18:47.370 "compare_and_write": false, 00:18:47.370 "abort": true, 00:18:47.370 "nvme_admin": false, 00:18:47.370 "nvme_io": false 00:18:47.370 }, 00:18:47.370 "memory_domains": [ 00:18:47.370 { 00:18:47.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.370 "dma_device_type": 2 00:18:47.370 } 00:18:47.370 ], 00:18:47.370 "driver_specific": {} 00:18:47.370 } 00:18:47.370 ] 00:18:47.370 12:38:29 -- common/autotest_common.sh@895 -- # return 0 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.370 12:38:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.629 12:38:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.629 "name": "Existed_Raid", 00:18:47.629 "uuid": "5c37982a-e332-4e2b-92fd-14d637d7aa5c", 00:18:47.629 "strip_size_kb": 0, 00:18:47.629 "state": "online", 00:18:47.629 "raid_level": "raid1", 00:18:47.629 "superblock": false, 00:18:47.629 "num_base_bdevs": 2, 00:18:47.629 
"num_base_bdevs_discovered": 2, 00:18:47.629 "num_base_bdevs_operational": 2, 00:18:47.629 "base_bdevs_list": [ 00:18:47.629 { 00:18:47.629 "name": "BaseBdev1", 00:18:47.629 "uuid": "a7d491a3-77f1-4b7b-9ccf-14314a9a2dab", 00:18:47.629 "is_configured": true, 00:18:47.629 "data_offset": 0, 00:18:47.629 "data_size": 65536 00:18:47.629 }, 00:18:47.629 { 00:18:47.629 "name": "BaseBdev2", 00:18:47.629 "uuid": "fe029c8b-46d7-4c09-8744-a2f0380845b7", 00:18:47.629 "is_configured": true, 00:18:47.629 "data_offset": 0, 00:18:47.629 "data_size": 65536 00:18:47.629 } 00:18:47.629 ] 00:18:47.630 }' 00:18:47.630 12:38:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.630 12:38:29 -- common/autotest_common.sh@10 -- # set +x 00:18:48.197 12:38:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:48.197 [2024-10-01 12:38:30.690345] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.456 12:38:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.456 "name": "Existed_Raid", 00:18:48.456 "uuid": "5c37982a-e332-4e2b-92fd-14d637d7aa5c", 00:18:48.456 "strip_size_kb": 0, 00:18:48.456 "state": "online", 00:18:48.456 "raid_level": "raid1", 00:18:48.456 "superblock": false, 00:18:48.456 "num_base_bdevs": 2, 00:18:48.456 "num_base_bdevs_discovered": 1, 00:18:48.456 "num_base_bdevs_operational": 1, 00:18:48.456 "base_bdevs_list": [ 00:18:48.456 { 00:18:48.456 "name": null, 00:18:48.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.456 "is_configured": false, 00:18:48.456 "data_offset": 0, 00:18:48.457 "data_size": 65536 00:18:48.457 }, 00:18:48.457 { 00:18:48.457 "name": "BaseBdev2", 00:18:48.457 "uuid": "fe029c8b-46d7-4c09-8744-a2f0380845b7", 00:18:48.457 "is_configured": true, 00:18:48.457 "data_offset": 0, 00:18:48.457 "data_size": 65536 00:18:48.457 } 00:18:48.457 ] 00:18:48.457 }' 00:18:48.457 12:38:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.457 12:38:30 -- common/autotest_common.sh@10 -- # set +x 00:18:49.024 12:38:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:49.024 12:38:31 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:18:49.024 12:38:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.024 12:38:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:49.282 12:38:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:49.282 12:38:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:49.282 12:38:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:49.282 [2024-10-01 12:38:31.803515] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:49.282 [2024-10-01 12:38:31.803542] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.282 [2024-10-01 12:38:31.803610] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.541 [2024-10-01 12:38:31.883504] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.541 [2024-10-01 12:38:31.883539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:49.541 12:38:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:49.541 12:38:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:49.541 12:38:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.541 12:38:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:49.799 12:38:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:49.799 12:38:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:49.799 12:38:32 -- bdev/bdev_raid.sh@287 -- # killprocess 114654 00:18:49.799 12:38:32 -- common/autotest_common.sh@926 -- # '[' -z 114654 ']' 00:18:49.799 12:38:32 -- common/autotest_common.sh@930 -- # kill -0 114654 00:18:49.799 12:38:32 -- common/autotest_common.sh@931 -- # uname 00:18:49.799 12:38:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:49.799 12:38:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114654 00:18:49.799 12:38:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:49.799 12:38:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:49.799 killing process with pid 114654 00:18:49.799 12:38:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114654' 00:18:49.799 12:38:32 -- common/autotest_common.sh@945 -- # kill 114654 00:18:49.799 [2024-10-01 12:38:32.113648] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:49.799 12:38:32 -- common/autotest_common.sh@950 -- # wait 114654 00:18:49.799 [2024-10-01 12:38:32.113777] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:50.736 00:18:50.736 real 0m8.278s 00:18:50.736 user 0m13.805s 00:18:50.736 sys 0m1.248s 00:18:50.736 12:38:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.736 12:38:33 -- common/autotest_common.sh@10 -- # set +x 00:18:50.736 ************************************ 00:18:50.736 END TEST raid_state_function_test 00:18:50.736 ************************************ 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:18:50.736 12:38:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:50.736 12:38:33 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:18:50.736 12:38:33 -- common/autotest_common.sh@10 -- # set +x 00:18:50.736 ************************************ 00:18:50.736 START TEST raid_state_function_test_sb 00:18:50.736 ************************************ 00:18:50.736 12:38:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=114952 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114952' 00:18:50.736 Process raid pid: 114952 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:50.736 12:38:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114952 /var/tmp/spdk-raid.sock 00:18:50.736 12:38:33 -- common/autotest_common.sh@819 -- # '[' -z 114952 ']' 00:18:50.736 12:38:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:50.736 12:38:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:50.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:50.736 12:38:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:50.736 12:38:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:50.737 12:38:33 -- common/autotest_common.sh@10 -- # set +x 00:18:50.996 [2024-10-01 12:38:33.334018] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
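For readers following the trace: the superblock variant about to start drives the same RPC cycle as the previous test, only with -s on the create call. A minimal hand-reproducible sketch against the same socket, using only commands that appear verbatim in this trace (socket path, rpc.py path, and bdev names are the harness's own):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1     # 32 MiB malloc disk, 512 B blocks
    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # -s writes an on-disk superblock
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "online" once both bases are claimed

The verify_raid_bdev_state helper seen throughout is essentially that last pipeline plus field checks on the returned JSON.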
00:18:50.996 [2024-10-01 12:38:33.334184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.996 [2024-10-01 12:38:33.506221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.255 [2024-10-01 12:38:33.654392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.514 [2024-10-01 12:38:33.801391] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.772 12:38:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:51.772 12:38:34 -- common/autotest_common.sh@852 -- # return 0 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:51.772 [2024-10-01 12:38:34.280689] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.772 [2024-10-01 12:38:34.280782] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.772 [2024-10-01 12:38:34.280792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.772 [2024-10-01 12:38:34.280808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.772 12:38:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.062 12:38:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.062 "name": "Existed_Raid", 00:18:52.062 "uuid": "1bd99cf2-05f5-4933-ad32-342b8f59016d", 00:18:52.062 "strip_size_kb": 0, 00:18:52.062 "state": "configuring", 00:18:52.062 "raid_level": "raid1", 00:18:52.062 "superblock": true, 00:18:52.062 "num_base_bdevs": 2, 00:18:52.062 "num_base_bdevs_discovered": 0, 00:18:52.062 "num_base_bdevs_operational": 2, 00:18:52.062 "base_bdevs_list": [ 00:18:52.062 { 00:18:52.062 "name": "BaseBdev1", 00:18:52.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.062 "is_configured": false, 00:18:52.062 "data_offset": 0, 00:18:52.062 "data_size": 0 00:18:52.062 }, 00:18:52.062 { 00:18:52.062 "name": "BaseBdev2", 00:18:52.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.062 "is_configured": false, 00:18:52.062 "data_offset": 0, 00:18:52.062 "data_size": 0 00:18:52.062 } 00:18:52.062 ] 00:18:52.062 }' 00:18:52.062 12:38:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.062 12:38:34 -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.629 12:38:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:52.888 [2024-10-01 12:38:35.171306] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.888 [2024-10-01 12:38:35.171354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:52.888 12:38:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:52.888 [2024-10-01 12:38:35.347102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:52.888 [2024-10-01 12:38:35.347174] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:52.888 [2024-10-01 12:38:35.347184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:52.888 [2024-10-01 12:38:35.347205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:52.888 12:38:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:53.146 [2024-10-01 12:38:35.554041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.146 BaseBdev1 00:18:53.146 12:38:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:53.146 12:38:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:53.146 12:38:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:53.146 12:38:35 -- common/autotest_common.sh@889 -- # local i 00:18:53.146 12:38:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:53.146 12:38:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:53.146 12:38:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.406 12:38:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:53.406 [ 00:18:53.406 { 00:18:53.406 "name": "BaseBdev1", 00:18:53.406 "aliases": [ 00:18:53.406 "22977213-985f-41d5-b92c-ed8bb33d899b" 00:18:53.406 ], 00:18:53.406 "product_name": "Malloc disk", 00:18:53.406 "block_size": 512, 00:18:53.406 "num_blocks": 65536, 00:18:53.406 "uuid": "22977213-985f-41d5-b92c-ed8bb33d899b", 00:18:53.406 "assigned_rate_limits": { 00:18:53.406 "rw_ios_per_sec": 0, 00:18:53.406 "rw_mbytes_per_sec": 0, 00:18:53.406 "r_mbytes_per_sec": 0, 00:18:53.406 "w_mbytes_per_sec": 0 00:18:53.406 }, 00:18:53.406 "claimed": true, 00:18:53.406 "claim_type": "exclusive_write", 00:18:53.406 "zoned": false, 00:18:53.406 "supported_io_types": { 00:18:53.406 "read": true, 00:18:53.406 "write": true, 00:18:53.406 "unmap": true, 00:18:53.406 "write_zeroes": true, 00:18:53.406 "flush": true, 00:18:53.406 "reset": true, 00:18:53.406 "compare": false, 00:18:53.406 "compare_and_write": false, 00:18:53.406 "abort": true, 00:18:53.406 "nvme_admin": false, 00:18:53.406 "nvme_io": false 00:18:53.406 }, 00:18:53.406 "memory_domains": [ 00:18:53.406 { 00:18:53.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.406 "dma_device_type": 2 00:18:53.406 } 00:18:53.406 ], 00:18:53.406 "driver_specific": {} 00:18:53.406 } 00:18:53.406 ] 00:18:53.406 12:38:35 -- 
common/autotest_common.sh@895 -- # return 0 00:18:53.406 12:38:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:53.406 12:38:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.406 12:38:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.407 12:38:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.665 12:38:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.665 "name": "Existed_Raid", 00:18:53.665 "uuid": "8e66be53-5b1b-4e2f-8ece-27bb5d18a294", 00:18:53.665 "strip_size_kb": 0, 00:18:53.665 "state": "configuring", 00:18:53.665 "raid_level": "raid1", 00:18:53.665 "superblock": true, 00:18:53.665 "num_base_bdevs": 2, 00:18:53.665 "num_base_bdevs_discovered": 1, 00:18:53.665 "num_base_bdevs_operational": 2, 00:18:53.665 "base_bdevs_list": [ 00:18:53.666 { 00:18:53.666 "name": "BaseBdev1", 00:18:53.666 "uuid": "22977213-985f-41d5-b92c-ed8bb33d899b", 00:18:53.666 "is_configured": true, 00:18:53.666 "data_offset": 2048, 00:18:53.666 "data_size": 63488 00:18:53.666 }, 00:18:53.666 { 00:18:53.666 "name": "BaseBdev2", 00:18:53.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.666 "is_configured": false, 00:18:53.666 "data_offset": 0, 00:18:53.666 "data_size": 0 00:18:53.666 } 00:18:53.666 ] 00:18:53.666 }' 00:18:53.666 12:38:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.666 12:38:36 -- common/autotest_common.sh@10 -- # set +x 00:18:54.233 12:38:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:54.233 [2024-10-01 12:38:36.764311] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:54.233 [2024-10-01 12:38:36.764484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:54.492 12:38:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:54.492 12:38:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:54.751 12:38:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:54.751 BaseBdev1 00:18:54.751 12:38:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:54.751 12:38:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:54.751 12:38:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:54.751 12:38:37 -- common/autotest_common.sh@889 -- # local i 00:18:54.751 12:38:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:54.751 12:38:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:54.751 12:38:37 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.010 12:38:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:55.269 [ 00:18:55.269 { 00:18:55.269 "name": "BaseBdev1", 00:18:55.269 "aliases": [ 00:18:55.269 "2b10d154-b266-4c5c-97e0-63d65db24f22" 00:18:55.269 ], 00:18:55.269 "product_name": "Malloc disk", 00:18:55.269 "block_size": 512, 00:18:55.269 "num_blocks": 65536, 00:18:55.269 "uuid": "2b10d154-b266-4c5c-97e0-63d65db24f22", 00:18:55.269 "assigned_rate_limits": { 00:18:55.269 "rw_ios_per_sec": 0, 00:18:55.269 "rw_mbytes_per_sec": 0, 00:18:55.269 "r_mbytes_per_sec": 0, 00:18:55.269 "w_mbytes_per_sec": 0 00:18:55.269 }, 00:18:55.269 "claimed": false, 00:18:55.269 "zoned": false, 00:18:55.269 "supported_io_types": { 00:18:55.269 "read": true, 00:18:55.269 "write": true, 00:18:55.269 "unmap": true, 00:18:55.269 "write_zeroes": true, 00:18:55.269 "flush": true, 00:18:55.269 "reset": true, 00:18:55.269 "compare": false, 00:18:55.269 "compare_and_write": false, 00:18:55.269 "abort": true, 00:18:55.269 "nvme_admin": false, 00:18:55.269 "nvme_io": false 00:18:55.269 }, 00:18:55.269 "memory_domains": [ 00:18:55.269 { 00:18:55.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.269 "dma_device_type": 2 00:18:55.269 } 00:18:55.269 ], 00:18:55.269 "driver_specific": {} 00:18:55.269 } 00:18:55.269 ] 00:18:55.269 12:38:37 -- common/autotest_common.sh@895 -- # return 0 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:55.269 [2024-10-01 12:38:37.751567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.269 [2024-10-01 12:38:37.753544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.269 [2024-10-01 12:38:37.753710] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.269 12:38:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.528 12:38:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.528 "name": "Existed_Raid", 00:18:55.528 "uuid": "e514fef7-6abb-457a-a6b5-bf29460ee31d", 00:18:55.528 "strip_size_kb": 0, 00:18:55.528 "state": "configuring", 
00:18:55.528 "raid_level": "raid1", 00:18:55.528 "superblock": true, 00:18:55.528 "num_base_bdevs": 2, 00:18:55.528 "num_base_bdevs_discovered": 1, 00:18:55.528 "num_base_bdevs_operational": 2, 00:18:55.528 "base_bdevs_list": [ 00:18:55.528 { 00:18:55.528 "name": "BaseBdev1", 00:18:55.528 "uuid": "2b10d154-b266-4c5c-97e0-63d65db24f22", 00:18:55.528 "is_configured": true, 00:18:55.528 "data_offset": 2048, 00:18:55.528 "data_size": 63488 00:18:55.528 }, 00:18:55.528 { 00:18:55.528 "name": "BaseBdev2", 00:18:55.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.528 "is_configured": false, 00:18:55.528 "data_offset": 0, 00:18:55.528 "data_size": 0 00:18:55.528 } 00:18:55.528 ] 00:18:55.528 }' 00:18:55.528 12:38:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.528 12:38:37 -- common/autotest_common.sh@10 -- # set +x 00:18:56.095 12:38:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:56.354 [2024-10-01 12:38:38.686153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.354 [2024-10-01 12:38:38.686573] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:56.354 [2024-10-01 12:38:38.686695] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:56.354 [2024-10-01 12:38:38.686845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:56.354 [2024-10-01 12:38:38.687238] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:56.354 [2024-10-01 12:38:38.687344] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:56.354 [2024-10-01 12:38:38.687563] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.354 BaseBdev2 00:18:56.354 12:38:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:56.354 12:38:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:56.354 12:38:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:56.354 12:38:38 -- common/autotest_common.sh@889 -- # local i 00:18:56.354 12:38:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:56.354 12:38:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:56.354 12:38:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:56.613 12:38:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:56.613 [ 00:18:56.613 { 00:18:56.613 "name": "BaseBdev2", 00:18:56.613 "aliases": [ 00:18:56.613 "33e94e7a-7a7a-43f6-8f99-6b9b1f667c10" 00:18:56.613 ], 00:18:56.613 "product_name": "Malloc disk", 00:18:56.613 "block_size": 512, 00:18:56.613 "num_blocks": 65536, 00:18:56.613 "uuid": "33e94e7a-7a7a-43f6-8f99-6b9b1f667c10", 00:18:56.613 "assigned_rate_limits": { 00:18:56.613 "rw_ios_per_sec": 0, 00:18:56.613 "rw_mbytes_per_sec": 0, 00:18:56.613 "r_mbytes_per_sec": 0, 00:18:56.613 "w_mbytes_per_sec": 0 00:18:56.613 }, 00:18:56.613 "claimed": true, 00:18:56.613 "claim_type": "exclusive_write", 00:18:56.613 "zoned": false, 00:18:56.613 "supported_io_types": { 00:18:56.613 "read": true, 00:18:56.613 "write": true, 00:18:56.613 "unmap": true, 00:18:56.613 "write_zeroes": true, 00:18:56.613 "flush": true, 00:18:56.613 "reset": true, 
00:18:56.613 "compare": false, 00:18:56.613 "compare_and_write": false, 00:18:56.613 "abort": true, 00:18:56.613 "nvme_admin": false, 00:18:56.613 "nvme_io": false 00:18:56.613 }, 00:18:56.613 "memory_domains": [ 00:18:56.613 { 00:18:56.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.613 "dma_device_type": 2 00:18:56.613 } 00:18:56.613 ], 00:18:56.613 "driver_specific": {} 00:18:56.613 } 00:18:56.613 ] 00:18:56.613 12:38:39 -- common/autotest_common.sh@895 -- # return 0 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.613 12:38:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.872 12:38:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.872 "name": "Existed_Raid", 00:18:56.872 "uuid": "e514fef7-6abb-457a-a6b5-bf29460ee31d", 00:18:56.872 "strip_size_kb": 0, 00:18:56.872 "state": "online", 00:18:56.872 "raid_level": "raid1", 00:18:56.872 "superblock": true, 00:18:56.872 "num_base_bdevs": 2, 00:18:56.872 "num_base_bdevs_discovered": 2, 00:18:56.872 "num_base_bdevs_operational": 2, 00:18:56.872 "base_bdevs_list": [ 00:18:56.872 { 00:18:56.872 "name": "BaseBdev1", 00:18:56.872 "uuid": "2b10d154-b266-4c5c-97e0-63d65db24f22", 00:18:56.872 "is_configured": true, 00:18:56.872 "data_offset": 2048, 00:18:56.872 "data_size": 63488 00:18:56.872 }, 00:18:56.872 { 00:18:56.872 "name": "BaseBdev2", 00:18:56.872 "uuid": "33e94e7a-7a7a-43f6-8f99-6b9b1f667c10", 00:18:56.872 "is_configured": true, 00:18:56.872 "data_offset": 2048, 00:18:56.872 "data_size": 63488 00:18:56.872 } 00:18:56.872 ] 00:18:56.872 }' 00:18:56.872 12:38:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.872 12:38:39 -- common/autotest_common.sh@10 -- # set +x 00:18:57.439 12:38:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:57.439 [2024-10-01 12:38:39.924453] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:57.698 
12:38:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:57.698 "name": "Existed_Raid", 00:18:57.698 "uuid": "e514fef7-6abb-457a-a6b5-bf29460ee31d", 00:18:57.698 "strip_size_kb": 0, 00:18:57.698 "state": "online", 00:18:57.698 "raid_level": "raid1", 00:18:57.698 "superblock": true, 00:18:57.698 "num_base_bdevs": 2, 00:18:57.698 "num_base_bdevs_discovered": 1, 00:18:57.698 "num_base_bdevs_operational": 1, 00:18:57.698 "base_bdevs_list": [ 00:18:57.698 { 00:18:57.698 "name": null, 00:18:57.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.698 "is_configured": false, 00:18:57.698 "data_offset": 2048, 00:18:57.698 "data_size": 63488 00:18:57.698 }, 00:18:57.698 { 00:18:57.698 "name": "BaseBdev2", 00:18:57.698 "uuid": "33e94e7a-7a7a-43f6-8f99-6b9b1f667c10", 00:18:57.698 "is_configured": true, 00:18:57.698 "data_offset": 2048, 00:18:57.698 "data_size": 63488 00:18:57.698 } 00:18:57.698 ] 00:18:57.698 }' 00:18:57.698 12:38:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:57.698 12:38:40 -- common/autotest_common.sh@10 -- # set +x 00:18:58.265 12:38:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:58.265 12:38:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:58.265 12:38:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.265 12:38:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:58.524 12:38:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:58.524 12:38:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.524 12:38:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:58.524 [2024-10-01 12:38:41.026965] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.524 [2024-10-01 12:38:41.027124] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.524 [2024-10-01 12:38:41.027344] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.783 [2024-10-01 12:38:41.109830] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.783 [2024-10-01 12:38:41.109985] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:58.783 12:38:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:58.783 12:38:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:58.783 12:38:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.783 12:38:41 -- bdev/bdev_raid.sh@281 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.042 12:38:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:59.043 12:38:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:59.043 12:38:41 -- bdev/bdev_raid.sh@287 -- # killprocess 114952 00:18:59.043 12:38:41 -- common/autotest_common.sh@926 -- # '[' -z 114952 ']' 00:18:59.043 12:38:41 -- common/autotest_common.sh@930 -- # kill -0 114952 00:18:59.043 12:38:41 -- common/autotest_common.sh@931 -- # uname 00:18:59.043 12:38:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:59.043 12:38:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114952 00:18:59.043 12:38:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:59.043 12:38:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:59.043 12:38:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114952' 00:18:59.043 killing process with pid 114952 00:18:59.043 12:38:41 -- common/autotest_common.sh@945 -- # kill 114952 00:18:59.043 [2024-10-01 12:38:41.349812] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.043 12:38:41 -- common/autotest_common.sh@950 -- # wait 114952 00:18:59.043 [2024-10-01 12:38:41.350053] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.976 ************************************ 00:18:59.976 END TEST raid_state_function_test_sb 00:18:59.976 ************************************ 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:59.976 00:18:59.976 real 0m9.162s 00:18:59.976 user 0m15.261s 00:18:59.976 sys 0m1.415s 00:18:59.976 12:38:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.976 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:18:59.976 12:38:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:59.976 12:38:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.976 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:59.976 ************************************ 00:18:59.976 START TEST raid_superblock_test 00:18:59.976 ************************************ 00:18:59.976 12:38:42 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=115269 00:18:59.976 
12:38:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:59.976 12:38:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115269 /var/tmp/spdk-raid.sock 00:18:59.976 12:38:42 -- common/autotest_common.sh@819 -- # '[' -z 115269 ']' 00:18:59.976 12:38:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:59.976 12:38:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.976 12:38:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:59.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:59.976 12:38:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.976 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:19:00.235 [2024-10-01 12:38:42.565576] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:00.235 [2024-10-01 12:38:42.565891] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115269 ] 00:19:00.235 [2024-10-01 12:38:42.731406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.493 [2024-10-01 12:38:42.878812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.493 [2024-10-01 12:38:43.022711] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.060 12:38:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:01.060 12:38:43 -- common/autotest_common.sh@852 -- # return 0 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:01.060 malloc1 00:19:01.060 12:38:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.318 [2024-10-01 12:38:43.725876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.318 [2024-10-01 12:38:43.726142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.318 [2024-10-01 12:38:43.726450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:01.318 [2024-10-01 12:38:43.726659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.318 [2024-10-01 12:38:43.729070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.318 [2024-10-01 12:38:43.729321] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.318 pt1 
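The superblock test stacks its base devices as passthru bdevs over malloc disks so each base carries a fixed UUID; the second leg (malloc2/pt2) and the raid create follow below in the trace. A minimal sketch of the whole setup, assembled from the commands traced here (UUIDs and names are the harness's):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_malloc_create 32 512 -b malloc2
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # superblock is written through pt1/pt2 onto malloc1/malloc2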
00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.318 12:38:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:01.575 malloc2 00:19:01.575 12:38:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.834 [2024-10-01 12:38:44.153283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.834 [2024-10-01 12:38:44.153512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.834 [2024-10-01 12:38:44.153821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:01.834 [2024-10-01 12:38:44.154063] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.834 [2024-10-01 12:38:44.156490] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.834 [2024-10-01 12:38:44.156985] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.834 pt2 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:19:01.834 [2024-10-01 12:38:44.333399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:01.834 [2024-10-01 12:38:44.335401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.834 [2024-10-01 12:38:44.335772] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:19:01.834 [2024-10-01 12:38:44.335996] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:01.834 [2024-10-01 12:38:44.336274] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:01.834 [2024-10-01 12:38:44.336782] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:19:01.834 [2024-10-01 12:38:44.336975] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:19:01.834 [2024-10-01 12:38:44.337321] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:01.834 12:38:44 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.834 12:38:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.093 12:38:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.093 "name": "raid_bdev1", 00:19:02.093 "uuid": "d618bf13-7eb3-4240-b845-00bf7337f619", 00:19:02.093 "strip_size_kb": 0, 00:19:02.093 "state": "online", 00:19:02.093 "raid_level": "raid1", 00:19:02.093 "superblock": true, 00:19:02.093 "num_base_bdevs": 2, 00:19:02.093 "num_base_bdevs_discovered": 2, 00:19:02.093 "num_base_bdevs_operational": 2, 00:19:02.093 "base_bdevs_list": [ 00:19:02.093 { 00:19:02.093 "name": "pt1", 00:19:02.093 "uuid": "cccbc04f-2b88-588e-9aeb-6c44ed8e3691", 00:19:02.093 "is_configured": true, 00:19:02.093 "data_offset": 2048, 00:19:02.093 "data_size": 63488 00:19:02.093 }, 00:19:02.093 { 00:19:02.093 "name": "pt2", 00:19:02.093 "uuid": "6b7ad762-c25d-5955-b395-01239a829f62", 00:19:02.093 "is_configured": true, 00:19:02.093 "data_offset": 2048, 00:19:02.093 "data_size": 63488 00:19:02.093 } 00:19:02.093 ] 00:19:02.093 }' 00:19:02.093 12:38:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.093 12:38:44 -- common/autotest_common.sh@10 -- # set +x 00:19:02.660 12:38:45 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:02.660 12:38:45 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:02.660 [2024-10-01 12:38:45.184335] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.919 12:38:45 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d618bf13-7eb3-4240-b845-00bf7337f619 00:19:02.919 12:38:45 -- bdev/bdev_raid.sh@380 -- # '[' -z d618bf13-7eb3-4240-b845-00bf7337f619 ']' 00:19:02.919 12:38:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:02.919 [2024-10-01 12:38:45.355907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.919 [2024-10-01 12:38:45.356113] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.919 [2024-10-01 12:38:45.356470] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.919 [2024-10-01 12:38:45.356700] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.919 [2024-10-01 12:38:45.356873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:19:02.919 12:38:45 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.919 12:38:45 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:03.178 12:38:45 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:03.178 12:38:45 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:03.178 12:38:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.178 12:38:45 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:03.437 12:38:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.437 12:38:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:03.437 12:38:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:03.437 12:38:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:03.696 12:38:46 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:03.696 12:38:46 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:19:03.696 12:38:46 -- common/autotest_common.sh@640 -- # local es=0 00:19:03.696 12:38:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:19:03.696 12:38:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.696 12:38:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.696 12:38:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.696 12:38:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.696 12:38:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.696 12:38:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.696 12:38:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.696 12:38:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:03.696 12:38:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:19:03.955 [2024-10-01 12:38:46.238978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:03.955 [2024-10-01 12:38:46.241005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:03.955 [2024-10-01 12:38:46.241229] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:03.955 [2024-10-01 12:38:46.241551] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:03.955 [2024-10-01 12:38:46.241832] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.955 [2024-10-01 12:38:46.241902] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:19:03.955 request: 00:19:03.955 { 00:19:03.955 "name": "raid_bdev1", 00:19:03.955 "raid_level": "raid1", 00:19:03.955 "base_bdevs": [ 00:19:03.955 "malloc1", 00:19:03.955 "malloc2" 00:19:03.955 ], 00:19:03.955 "superblock": false, 00:19:03.955 "method": "bdev_raid_create", 00:19:03.955 "req_id": 1 00:19:03.955 } 00:19:03.955 Got JSON-RPC error response 00:19:03.955 response: 00:19:03.955 { 00:19:03.955 "code": -17, 00:19:03.955 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:03.955 } 00:19:03.955 12:38:46 -- common/autotest_common.sh@643 -- # es=1 00:19:03.955 
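The failure above is deliberate: the test wraps bdev_raid_create in the NOT helper because malloc1 and malloc2 still carry the raid superblock written for the earlier raid_bdev1, so a fresh create must be rejected with JSON-RPC error -17 (File exists) rather than silently reusing stale metadata. A minimal sketch of that negative check, assuming the same rpc.py path and socket used throughout this log:

    # Sketch only: reproduce the negative test traced above. The rpc/sock
    # values are the ones this log uses; adjust for another environment.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # malloc1/malloc2 still hold a stale raid superblock here, so the
    # create must fail (rpc.py exits nonzero on a JSON-RPC error).
    if "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'FAIL: create over a stale superblock unexpectedly succeeded' >&2
        exit 1
    fi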
12:38:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:03.955 12:38:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:03.955 12:38:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:03.955 12:38:46 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.955 12:38:46 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:03.955 12:38:46 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:03.955 12:38:46 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:03.955 12:38:46 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.214 [2024-10-01 12:38:46.594389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.214 [2024-10-01 12:38:46.594664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.214 [2024-10-01 12:38:46.594981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:19:04.214 [2024-10-01 12:38:46.595177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.214 [2024-10-01 12:38:46.597474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.214 [2024-10-01 12:38:46.597716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.214 [2024-10-01 12:38:46.597997] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:04.214 [2024-10-01 12:38:46.598230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.214 pt1 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.214 12:38:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.473 12:38:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.473 "name": "raid_bdev1", 00:19:04.473 "uuid": "d618bf13-7eb3-4240-b845-00bf7337f619", 00:19:04.473 "strip_size_kb": 0, 00:19:04.473 "state": "configuring", 00:19:04.473 "raid_level": "raid1", 00:19:04.473 "superblock": true, 00:19:04.473 "num_base_bdevs": 2, 00:19:04.473 "num_base_bdevs_discovered": 1, 00:19:04.473 "num_base_bdevs_operational": 2, 00:19:04.473 "base_bdevs_list": [ 00:19:04.473 { 00:19:04.473 "name": "pt1", 00:19:04.473 "uuid": "cccbc04f-2b88-588e-9aeb-6c44ed8e3691", 00:19:04.473 "is_configured": true, 00:19:04.473 "data_offset": 2048, 00:19:04.473 "data_size": 63488 00:19:04.473 }, 00:19:04.473 { 00:19:04.473 "name": null, 00:19:04.473 "uuid": 
"6b7ad762-c25d-5955-b395-01239a829f62", 00:19:04.473 "is_configured": false, 00:19:04.473 "data_offset": 2048, 00:19:04.473 "data_size": 63488 00:19:04.473 } 00:19:04.473 ] 00:19:04.473 }' 00:19:04.473 12:38:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.473 12:38:46 -- common/autotest_common.sh@10 -- # set +x 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.042 [2024-10-01 12:38:47.469311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.042 [2024-10-01 12:38:47.469588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.042 [2024-10-01 12:38:47.469889] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:05.042 [2024-10-01 12:38:47.470090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.042 [2024-10-01 12:38:47.470662] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.042 [2024-10-01 12:38:47.470935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.042 [2024-10-01 12:38:47.471219] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:05.042 [2024-10-01 12:38:47.471427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.042 [2024-10-01 12:38:47.471713] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:05.042 [2024-10-01 12:38:47.471928] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:05.042 [2024-10-01 12:38:47.472207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:05.042 [2024-10-01 12:38:47.472678] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:05.042 [2024-10-01 12:38:47.472876] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:19:05.042 [2024-10-01 12:38:47.473175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.042 pt2 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:05.042 12:38:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.302 12:38:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.302 "name": "raid_bdev1", 00:19:05.302 "uuid": "d618bf13-7eb3-4240-b845-00bf7337f619", 00:19:05.302 "strip_size_kb": 0, 00:19:05.302 "state": "online", 00:19:05.302 "raid_level": "raid1", 00:19:05.302 "superblock": true, 00:19:05.302 "num_base_bdevs": 2, 00:19:05.302 "num_base_bdevs_discovered": 2, 00:19:05.302 "num_base_bdevs_operational": 2, 00:19:05.302 "base_bdevs_list": [ 00:19:05.302 { 00:19:05.302 "name": "pt1", 00:19:05.302 "uuid": "cccbc04f-2b88-588e-9aeb-6c44ed8e3691", 00:19:05.302 "is_configured": true, 00:19:05.302 "data_offset": 2048, 00:19:05.302 "data_size": 63488 00:19:05.302 }, 00:19:05.302 { 00:19:05.302 "name": "pt2", 00:19:05.302 "uuid": "6b7ad762-c25d-5955-b395-01239a829f62", 00:19:05.302 "is_configured": true, 00:19:05.302 "data_offset": 2048, 00:19:05.302 "data_size": 63488 00:19:05.302 } 00:19:05.302 ] 00:19:05.302 }' 00:19:05.302 12:38:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.302 12:38:47 -- common/autotest_common.sh@10 -- # set +x 00:19:05.870 12:38:48 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:05.870 12:38:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:05.870 [2024-10-01 12:38:48.328245] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.870 12:38:48 -- bdev/bdev_raid.sh@430 -- # '[' d618bf13-7eb3-4240-b845-00bf7337f619 '!=' d618bf13-7eb3-4240-b845-00bf7337f619 ']' 00:19:05.870 12:38:48 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:05.870 12:38:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:05.870 12:38:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:05.870 12:38:48 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:06.130 [2024-10-01 12:38:48.507814] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.130 12:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.389 12:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.389 "name": "raid_bdev1", 00:19:06.389 "uuid": "d618bf13-7eb3-4240-b845-00bf7337f619", 00:19:06.389 "strip_size_kb": 0, 00:19:06.389 "state": "online", 00:19:06.389 "raid_level": "raid1", 00:19:06.389 "superblock": true, 00:19:06.389 "num_base_bdevs": 2, 00:19:06.389 "num_base_bdevs_discovered": 1, 00:19:06.389 
"num_base_bdevs_operational": 1, 00:19:06.389 "base_bdevs_list": [ 00:19:06.389 { 00:19:06.389 "name": null, 00:19:06.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.389 "is_configured": false, 00:19:06.389 "data_offset": 2048, 00:19:06.389 "data_size": 63488 00:19:06.389 }, 00:19:06.389 { 00:19:06.389 "name": "pt2", 00:19:06.389 "uuid": "6b7ad762-c25d-5955-b395-01239a829f62", 00:19:06.389 "is_configured": true, 00:19:06.389 "data_offset": 2048, 00:19:06.389 "data_size": 63488 00:19:06.389 } 00:19:06.389 ] 00:19:06.389 }' 00:19:06.389 12:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.389 12:38:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.959 12:38:49 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:06.959 [2024-10-01 12:38:49.406644] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.959 [2024-10-01 12:38:49.406852] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.959 [2024-10-01 12:38:49.407211] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.959 [2024-10-01 12:38:49.407421] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.959 [2024-10-01 12:38:49.407593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:19:06.959 12:38:49 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.959 12:38:49 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:07.218 12:38:49 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:07.218 12:38:49 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:07.218 12:38:49 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:07.218 12:38:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:07.218 12:38:49 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@462 -- # i=1 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.479 [2024-10-01 12:38:49.933866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.479 [2024-10-01 12:38:49.934130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.479 [2024-10-01 12:38:49.934402] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:07.479 [2024-10-01 12:38:49.934600] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.479 [2024-10-01 12:38:49.936949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.479 [2024-10-01 12:38:49.937222] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.479 [2024-10-01 12:38:49.937501] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:07.479 [2024-10-01 
12:38:49.937720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.479 [2024-10-01 12:38:49.937982] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:19:07.479 [2024-10-01 12:38:49.938163] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:07.479 [2024-10-01 12:38:49.938403] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:07.479 [2024-10-01 12:38:49.938861] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:19:07.479 [2024-10-01 12:38:49.939070] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:19:07.479 [2024-10-01 12:38:49.939406] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.479 pt2 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.479 12:38:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.738 12:38:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.738 "name": "raid_bdev1", 00:19:07.738 "uuid": "d618bf13-7eb3-4240-b845-00bf7337f619", 00:19:07.738 "strip_size_kb": 0, 00:19:07.738 "state": "online", 00:19:07.738 "raid_level": "raid1", 00:19:07.738 "superblock": true, 00:19:07.738 "num_base_bdevs": 2, 00:19:07.738 "num_base_bdevs_discovered": 1, 00:19:07.738 "num_base_bdevs_operational": 1, 00:19:07.738 "base_bdevs_list": [ 00:19:07.738 { 00:19:07.738 "name": null, 00:19:07.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.738 "is_configured": false, 00:19:07.738 "data_offset": 2048, 00:19:07.738 "data_size": 63488 00:19:07.738 }, 00:19:07.738 { 00:19:07.738 "name": "pt2", 00:19:07.738 "uuid": "6b7ad762-c25d-5955-b395-01239a829f62", 00:19:07.738 "is_configured": true, 00:19:07.738 "data_offset": 2048, 00:19:07.738 "data_size": 63488 00:19:07.738 } 00:19:07.738 ] 00:19:07.738 }' 00:19:07.738 12:38:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.738 12:38:50 -- common/autotest_common.sh@10 -- # set +x 00:19:08.304 12:38:50 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:19:08.304 12:38:50 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:08.304 12:38:50 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:08.562 [2024-10-01 12:38:50.848962] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.563 12:38:50 -- bdev/bdev_raid.sh@506 -- # '[' d618bf13-7eb3-4240-b845-00bf7337f619 '!=' d618bf13-7eb3-4240-b845-00bf7337f619 ']' 00:19:08.563 
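The re-assembly just traced is the core superblock property under test: after raid_bdev1 and pt1 are torn down, recreating pt2 alone is enough for examine to find the raid superblock on it and bring raid_bdev1 back online in degraded form. A sketch of the state assertion verify_raid_bdev_state performs at this point, assuming the same socket:

    # Sketch: assert that a one-leg raid1 re-assembled from its on-disk
    # superblock is online but degraded. Paths are the ones in this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.state' <<<"$info")" = online ] || exit 1
    [ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 1 ] || exit 1
    [ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" -eq 1 ] || exit 1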
12:38:50 -- bdev/bdev_raid.sh@511 -- # killprocess 115269 00:19:08.563 12:38:50 -- common/autotest_common.sh@926 -- # '[' -z 115269 ']' 00:19:08.563 12:38:50 -- common/autotest_common.sh@930 -- # kill -0 115269 00:19:08.563 12:38:50 -- common/autotest_common.sh@931 -- # uname 00:19:08.563 12:38:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:08.563 12:38:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115269 00:19:08.563 12:38:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:08.563 12:38:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:08.563 12:38:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115269' 00:19:08.563 killing process with pid 115269 00:19:08.563 12:38:50 -- common/autotest_common.sh@945 -- # kill 115269 00:19:08.563 12:38:50 -- common/autotest_common.sh@950 -- # wait 115269 00:19:08.563 [2024-10-01 12:38:50.886297] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.563 [2024-10-01 12:38:50.886624] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.563 [2024-10-01 12:38:50.886846] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.563 [2024-10-01 12:38:50.887051] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:19:08.563 [2024-10-01 12:38:51.043615] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.985 ************************************ 00:19:09.986 END TEST raid_superblock_test 00:19:09.986 ************************************ 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:09.986 00:19:09.986 real 0m9.605s 00:19:09.986 user 0m16.198s 00:19:09.986 sys 0m1.646s 00:19:09.986 12:38:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.986 12:38:52 -- common/autotest_common.sh@10 -- # set +x 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:19:09.986 12:38:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:09.986 12:38:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.986 12:38:52 -- common/autotest_common.sh@10 -- # set +x 00:19:09.986 ************************************ 00:19:09.986 START TEST raid_state_function_test 00:19:09.986 ************************************ 00:19:09.986 12:38:52 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=115596 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115596' 00:19:09.986 Process raid pid: 115596 00:19:09.986 12:38:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115596 /var/tmp/spdk-raid.sock 00:19:09.986 12:38:52 -- common/autotest_common.sh@819 -- # '[' -z 115596 ']' 00:19:09.986 12:38:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:09.986 12:38:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:09.986 12:38:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:09.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:09.986 12:38:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:09.986 12:38:52 -- common/autotest_common.sh@10 -- # set +x 00:19:09.986 [2024-10-01 12:38:52.266347] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
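Each state-function test gets a fresh target: bdev_svc is started on the private RPC socket with bdev_raid debug logging, the pid is recorded for the later killprocess, and waitforlisten polls the socket before any RPC is issued. In sketch form, with the binary and socket paths taken from this trace (the real waitforlisten in autotest_common.sh adds retry limits and better error reporting):

    # Sketch of the target bring-up these tests perform.
    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$svc" -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the RPC server answers; rpc_get_methods is a cheap query.
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done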
00:19:09.986 [2024-10-01 12:38:52.266650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.986 [2024-10-01 12:38:52.422126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.245 [2024-10-01 12:38:52.574220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.245 [2024-10-01 12:38:52.721159] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.812 12:38:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:10.812 12:38:53 -- common/autotest_common.sh@852 -- # return 0 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:10.812 [2024-10-01 12:38:53.230821] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.812 [2024-10-01 12:38:53.231075] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.812 [2024-10-01 12:38:53.231157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.812 [2024-10-01 12:38:53.231206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.812 [2024-10-01 12:38:53.231232] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.812 [2024-10-01 12:38:53.231292] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.812 12:38:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.071 12:38:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.071 "name": "Existed_Raid", 00:19:11.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.071 "strip_size_kb": 64, 00:19:11.071 "state": "configuring", 00:19:11.071 "raid_level": "raid0", 00:19:11.071 "superblock": false, 00:19:11.071 "num_base_bdevs": 3, 00:19:11.071 "num_base_bdevs_discovered": 0, 00:19:11.071 "num_base_bdevs_operational": 3, 00:19:11.071 "base_bdevs_list": [ 00:19:11.071 { 00:19:11.071 "name": "BaseBdev1", 00:19:11.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.071 "is_configured": false, 00:19:11.071 "data_offset": 0, 00:19:11.071 "data_size": 0 00:19:11.071 }, 00:19:11.071 { 00:19:11.071 "name": "BaseBdev2", 00:19:11.071 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:11.071 "is_configured": false, 00:19:11.071 "data_offset": 0, 00:19:11.071 "data_size": 0 00:19:11.071 }, 00:19:11.071 { 00:19:11.071 "name": "BaseBdev3", 00:19:11.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.071 "is_configured": false, 00:19:11.071 "data_offset": 0, 00:19:11.071 "data_size": 0 00:19:11.071 } 00:19:11.071 ] 00:19:11.071 }' 00:19:11.071 12:38:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.071 12:38:53 -- common/autotest_common.sh@10 -- # set +x 00:19:11.638 12:38:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:11.638 [2024-10-01 12:38:54.097522] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:11.639 [2024-10-01 12:38:54.097676] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:11.639 12:38:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:11.897 [2024-10-01 12:38:54.277267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.897 [2024-10-01 12:38:54.277453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.897 [2024-10-01 12:38:54.277575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.897 [2024-10-01 12:38:54.277633] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.897 [2024-10-01 12:38:54.277660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.897 [2024-10-01 12:38:54.277755] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.897 12:38:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.156 [2024-10-01 12:38:54.463748] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.156 BaseBdev1 00:19:12.156 12:38:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:12.156 12:38:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:12.156 12:38:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:12.156 12:38:54 -- common/autotest_common.sh@889 -- # local i 00:19:12.156 12:38:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:12.156 12:38:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:12.156 12:38:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.156 12:38:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:12.414 [ 00:19:12.414 { 00:19:12.414 "name": "BaseBdev1", 00:19:12.414 "aliases": [ 00:19:12.414 "33ba24a4-9e98-408d-b563-7c9a341a9414" 00:19:12.414 ], 00:19:12.414 "product_name": "Malloc disk", 00:19:12.414 "block_size": 512, 00:19:12.414 "num_blocks": 65536, 00:19:12.414 "uuid": "33ba24a4-9e98-408d-b563-7c9a341a9414", 00:19:12.414 "assigned_rate_limits": { 00:19:12.414 "rw_ios_per_sec": 0, 00:19:12.414 "rw_mbytes_per_sec": 0, 00:19:12.414 "r_mbytes_per_sec": 0, 00:19:12.414 "w_mbytes_per_sec": 0 
00:19:12.414 }, 00:19:12.414 "claimed": true, 00:19:12.414 "claim_type": "exclusive_write", 00:19:12.414 "zoned": false, 00:19:12.414 "supported_io_types": { 00:19:12.414 "read": true, 00:19:12.414 "write": true, 00:19:12.414 "unmap": true, 00:19:12.414 "write_zeroes": true, 00:19:12.414 "flush": true, 00:19:12.414 "reset": true, 00:19:12.414 "compare": false, 00:19:12.414 "compare_and_write": false, 00:19:12.414 "abort": true, 00:19:12.414 "nvme_admin": false, 00:19:12.414 "nvme_io": false 00:19:12.414 }, 00:19:12.414 "memory_domains": [ 00:19:12.414 { 00:19:12.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.414 "dma_device_type": 2 00:19:12.414 } 00:19:12.414 ], 00:19:12.414 "driver_specific": {} 00:19:12.414 } 00:19:12.414 ] 00:19:12.414 12:38:54 -- common/autotest_common.sh@895 -- # return 0 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.414 12:38:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.672 12:38:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.672 "name": "Existed_Raid", 00:19:12.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.672 "strip_size_kb": 64, 00:19:12.672 "state": "configuring", 00:19:12.672 "raid_level": "raid0", 00:19:12.672 "superblock": false, 00:19:12.672 "num_base_bdevs": 3, 00:19:12.672 "num_base_bdevs_discovered": 1, 00:19:12.672 "num_base_bdevs_operational": 3, 00:19:12.672 "base_bdevs_list": [ 00:19:12.672 { 00:19:12.672 "name": "BaseBdev1", 00:19:12.672 "uuid": "33ba24a4-9e98-408d-b563-7c9a341a9414", 00:19:12.672 "is_configured": true, 00:19:12.672 "data_offset": 0, 00:19:12.672 "data_size": 65536 00:19:12.672 }, 00:19:12.672 { 00:19:12.672 "name": "BaseBdev2", 00:19:12.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.672 "is_configured": false, 00:19:12.672 "data_offset": 0, 00:19:12.672 "data_size": 0 00:19:12.672 }, 00:19:12.672 { 00:19:12.672 "name": "BaseBdev3", 00:19:12.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.672 "is_configured": false, 00:19:12.672 "data_offset": 0, 00:19:12.672 "data_size": 0 00:19:12.672 } 00:19:12.672 ] 00:19:12.672 }' 00:19:12.672 12:38:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.672 12:38:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.238 12:38:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:13.238 [2024-10-01 12:38:55.662096] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.238 [2024-10-01 12:38:55.662289] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:19:13.238 12:38:55 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:13.238 12:38:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:13.496 [2024-10-01 12:38:55.841913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.496 [2024-10-01 12:38:55.843892] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.496 [2024-10-01 12:38:55.844098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.496 [2024-10-01 12:38:55.844190] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:13.496 [2024-10-01 12:38:55.844249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.496 12:38:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.754 12:38:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.754 "name": "Existed_Raid", 00:19:13.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.754 "strip_size_kb": 64, 00:19:13.754 "state": "configuring", 00:19:13.754 "raid_level": "raid0", 00:19:13.754 "superblock": false, 00:19:13.754 "num_base_bdevs": 3, 00:19:13.754 "num_base_bdevs_discovered": 1, 00:19:13.754 "num_base_bdevs_operational": 3, 00:19:13.754 "base_bdevs_list": [ 00:19:13.754 { 00:19:13.754 "name": "BaseBdev1", 00:19:13.754 "uuid": "33ba24a4-9e98-408d-b563-7c9a341a9414", 00:19:13.754 "is_configured": true, 00:19:13.754 "data_offset": 0, 00:19:13.754 "data_size": 65536 00:19:13.754 }, 00:19:13.754 { 00:19:13.754 "name": "BaseBdev2", 00:19:13.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.754 "is_configured": false, 00:19:13.754 "data_offset": 0, 00:19:13.754 "data_size": 0 00:19:13.754 }, 00:19:13.754 { 00:19:13.754 "name": "BaseBdev3", 00:19:13.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.754 "is_configured": false, 00:19:13.754 "data_offset": 0, 00:19:13.754 "data_size": 0 00:19:13.754 } 00:19:13.754 ] 00:19:13.754 }' 00:19:13.754 12:38:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.754 12:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 12:38:56 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:14.322 [2024-10-01 12:38:56.797765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.322 BaseBdev2 00:19:14.322 12:38:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:14.322 12:38:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:14.322 12:38:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:14.322 12:38:56 -- common/autotest_common.sh@889 -- # local i 00:19:14.322 12:38:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:14.322 12:38:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:14.322 12:38:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.580 12:38:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:14.839 [ 00:19:14.839 { 00:19:14.839 "name": "BaseBdev2", 00:19:14.839 "aliases": [ 00:19:14.839 "94df16d3-5406-4cd9-8fd8-b5d099f1e78f" 00:19:14.839 ], 00:19:14.839 "product_name": "Malloc disk", 00:19:14.839 "block_size": 512, 00:19:14.839 "num_blocks": 65536, 00:19:14.839 "uuid": "94df16d3-5406-4cd9-8fd8-b5d099f1e78f", 00:19:14.839 "assigned_rate_limits": { 00:19:14.839 "rw_ios_per_sec": 0, 00:19:14.839 "rw_mbytes_per_sec": 0, 00:19:14.839 "r_mbytes_per_sec": 0, 00:19:14.839 "w_mbytes_per_sec": 0 00:19:14.839 }, 00:19:14.839 "claimed": true, 00:19:14.839 "claim_type": "exclusive_write", 00:19:14.839 "zoned": false, 00:19:14.839 "supported_io_types": { 00:19:14.839 "read": true, 00:19:14.839 "write": true, 00:19:14.839 "unmap": true, 00:19:14.839 "write_zeroes": true, 00:19:14.839 "flush": true, 00:19:14.839 "reset": true, 00:19:14.839 "compare": false, 00:19:14.839 "compare_and_write": false, 00:19:14.839 "abort": true, 00:19:14.839 "nvme_admin": false, 00:19:14.839 "nvme_io": false 00:19:14.839 }, 00:19:14.839 "memory_domains": [ 00:19:14.839 { 00:19:14.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.839 "dma_device_type": 2 00:19:14.839 } 00:19:14.839 ], 00:19:14.839 "driver_specific": {} 00:19:14.839 } 00:19:14.839 ] 00:19:14.839 12:38:57 -- common/autotest_common.sh@895 -- # return 0 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
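With BaseBdev2 registered, the pending raid0 array claims it immediately, so the dump that follows should show num_base_bdevs_discovered rising from 1 to 2 while the state stays configuring; the array only transitions to online once BaseBdev3 arrives. A sketch of that incremental check, reusing the RPC paths from this log:

    # Sketch: a raid0 created before its base bdevs exist claims each one
    # as it appears and stays "configuring" until all three are present.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
        n=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered')
        [ "$n" -eq "$i" ] || exit 1
    done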
00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.839 "name": "Existed_Raid", 00:19:14.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.839 "strip_size_kb": 64, 00:19:14.839 "state": "configuring", 00:19:14.839 "raid_level": "raid0", 00:19:14.839 "superblock": false, 00:19:14.839 "num_base_bdevs": 3, 00:19:14.839 "num_base_bdevs_discovered": 2, 00:19:14.839 "num_base_bdevs_operational": 3, 00:19:14.839 "base_bdevs_list": [ 00:19:14.839 { 00:19:14.839 "name": "BaseBdev1", 00:19:14.839 "uuid": "33ba24a4-9e98-408d-b563-7c9a341a9414", 00:19:14.839 "is_configured": true, 00:19:14.839 "data_offset": 0, 00:19:14.839 "data_size": 65536 00:19:14.839 }, 00:19:14.839 { 00:19:14.839 "name": "BaseBdev2", 00:19:14.839 "uuid": "94df16d3-5406-4cd9-8fd8-b5d099f1e78f", 00:19:14.839 "is_configured": true, 00:19:14.839 "data_offset": 0, 00:19:14.839 "data_size": 65536 00:19:14.839 }, 00:19:14.839 { 00:19:14.839 "name": "BaseBdev3", 00:19:14.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.839 "is_configured": false, 00:19:14.839 "data_offset": 0, 00:19:14.839 "data_size": 0 00:19:14.839 } 00:19:14.839 ] 00:19:14.839 }' 00:19:14.839 12:38:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.839 12:38:57 -- common/autotest_common.sh@10 -- # set +x 00:19:15.408 12:38:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:15.667 [2024-10-01 12:38:58.061531] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.667 [2024-10-01 12:38:58.061641] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:15.667 [2024-10-01 12:38:58.061671] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:15.667 [2024-10-01 12:38:58.061794] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:15.667 [2024-10-01 12:38:58.062129] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:15.667 [2024-10-01 12:38:58.062284] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:15.667 [2024-10-01 12:38:58.062547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.667 BaseBdev3 00:19:15.667 12:38:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:15.667 12:38:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:15.667 12:38:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:15.667 12:38:58 -- common/autotest_common.sh@889 -- # local i 00:19:15.667 12:38:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:15.667 12:38:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:15.667 12:38:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.926 12:38:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:15.926 [ 00:19:15.926 { 00:19:15.926 "name": "BaseBdev3", 00:19:15.926 "aliases": [ 00:19:15.926 "b1f3f543-a21c-4e05-b139-58d11ce020dd" 00:19:15.926 ], 00:19:15.926 "product_name": "Malloc disk", 00:19:15.926 "block_size": 512, 00:19:15.926 "num_blocks": 65536, 00:19:15.926 "uuid": "b1f3f543-a21c-4e05-b139-58d11ce020dd", 00:19:15.926 "assigned_rate_limits": { 00:19:15.926 
"rw_ios_per_sec": 0, 00:19:15.926 "rw_mbytes_per_sec": 0, 00:19:15.926 "r_mbytes_per_sec": 0, 00:19:15.926 "w_mbytes_per_sec": 0 00:19:15.926 }, 00:19:15.926 "claimed": true, 00:19:15.926 "claim_type": "exclusive_write", 00:19:15.926 "zoned": false, 00:19:15.926 "supported_io_types": { 00:19:15.926 "read": true, 00:19:15.926 "write": true, 00:19:15.926 "unmap": true, 00:19:15.926 "write_zeroes": true, 00:19:15.926 "flush": true, 00:19:15.926 "reset": true, 00:19:15.926 "compare": false, 00:19:15.926 "compare_and_write": false, 00:19:15.926 "abort": true, 00:19:15.926 "nvme_admin": false, 00:19:15.926 "nvme_io": false 00:19:15.926 }, 00:19:15.926 "memory_domains": [ 00:19:15.926 { 00:19:15.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.926 "dma_device_type": 2 00:19:15.926 } 00:19:15.926 ], 00:19:15.926 "driver_specific": {} 00:19:15.926 } 00:19:15.926 ] 00:19:15.926 12:38:58 -- common/autotest_common.sh@895 -- # return 0 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.926 12:38:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.185 12:38:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.186 "name": "Existed_Raid", 00:19:16.186 "uuid": "b8ce6b01-dab4-443e-a6d4-d2365088d98a", 00:19:16.186 "strip_size_kb": 64, 00:19:16.186 "state": "online", 00:19:16.186 "raid_level": "raid0", 00:19:16.186 "superblock": false, 00:19:16.186 "num_base_bdevs": 3, 00:19:16.186 "num_base_bdevs_discovered": 3, 00:19:16.186 "num_base_bdevs_operational": 3, 00:19:16.186 "base_bdevs_list": [ 00:19:16.186 { 00:19:16.186 "name": "BaseBdev1", 00:19:16.186 "uuid": "33ba24a4-9e98-408d-b563-7c9a341a9414", 00:19:16.186 "is_configured": true, 00:19:16.186 "data_offset": 0, 00:19:16.186 "data_size": 65536 00:19:16.186 }, 00:19:16.186 { 00:19:16.186 "name": "BaseBdev2", 00:19:16.186 "uuid": "94df16d3-5406-4cd9-8fd8-b5d099f1e78f", 00:19:16.186 "is_configured": true, 00:19:16.186 "data_offset": 0, 00:19:16.186 "data_size": 65536 00:19:16.186 }, 00:19:16.186 { 00:19:16.186 "name": "BaseBdev3", 00:19:16.186 "uuid": "b1f3f543-a21c-4e05-b139-58d11ce020dd", 00:19:16.186 "is_configured": true, 00:19:16.186 "data_offset": 0, 00:19:16.186 "data_size": 65536 00:19:16.186 } 00:19:16.186 ] 00:19:16.186 }' 00:19:16.186 12:38:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.186 12:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:16.753 12:38:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:19:17.012 [2024-10-01 12:38:59.367779] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.012 [2024-10-01 12:38:59.367933] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.012 [2024-10-01 12:38:59.368139] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.012 12:38:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:17.012 12:38:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.013 12:38:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.272 12:38:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.272 "name": "Existed_Raid", 00:19:17.272 "uuid": "b8ce6b01-dab4-443e-a6d4-d2365088d98a", 00:19:17.272 "strip_size_kb": 64, 00:19:17.272 "state": "offline", 00:19:17.272 "raid_level": "raid0", 00:19:17.272 "superblock": false, 00:19:17.272 "num_base_bdevs": 3, 00:19:17.272 "num_base_bdevs_discovered": 2, 00:19:17.272 "num_base_bdevs_operational": 2, 00:19:17.272 "base_bdevs_list": [ 00:19:17.272 { 00:19:17.272 "name": null, 00:19:17.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.272 "is_configured": false, 00:19:17.272 "data_offset": 0, 00:19:17.272 "data_size": 65536 00:19:17.272 }, 00:19:17.272 { 00:19:17.272 "name": "BaseBdev2", 00:19:17.272 "uuid": "94df16d3-5406-4cd9-8fd8-b5d099f1e78f", 00:19:17.272 "is_configured": true, 00:19:17.272 "data_offset": 0, 00:19:17.272 "data_size": 65536 00:19:17.272 }, 00:19:17.272 { 00:19:17.272 "name": "BaseBdev3", 00:19:17.272 "uuid": "b1f3f543-a21c-4e05-b139-58d11ce020dd", 00:19:17.272 "is_configured": true, 00:19:17.272 "data_offset": 0, 00:19:17.272 "data_size": 65536 00:19:17.272 } 00:19:17.272 ] 00:19:17.272 }' 00:19:17.272 12:38:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.272 12:38:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.840 12:39:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:17.840 12:39:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:17.840 12:39:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.840 12:39:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:18.099 12:39:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:18.099 12:39:00 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.099 12:39:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:18.099 [2024-10-01 12:39:00.561083] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.359 12:39:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:18.359 12:39:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.359 12:39:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.359 12:39:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:18.359 12:39:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:18.359 12:39:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.359 12:39:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:18.618 [2024-10-01 12:39:01.077625] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.618 [2024-10-01 12:39:01.077873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:18.876 12:39:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:18.876 12:39:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.876 12:39:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.876 12:39:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:18.876 12:39:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:18.876 12:39:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:18.876 12:39:01 -- bdev/bdev_raid.sh@287 -- # killprocess 115596 00:19:18.876 12:39:01 -- common/autotest_common.sh@926 -- # '[' -z 115596 ']' 00:19:18.876 12:39:01 -- common/autotest_common.sh@930 -- # kill -0 115596 00:19:18.876 12:39:01 -- common/autotest_common.sh@931 -- # uname 00:19:18.876 12:39:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:18.876 12:39:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115596 00:19:18.876 killing process with pid 115596 00:19:18.876 12:39:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:18.876 12:39:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:18.876 12:39:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115596' 00:19:18.876 12:39:01 -- common/autotest_common.sh@945 -- # kill 115596 00:19:18.876 [2024-10-01 12:39:01.398672] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:18.876 12:39:01 -- common/autotest_common.sh@950 -- # wait 115596 00:19:18.876 [2024-10-01 12:39:01.398777] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:20.317 12:39:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:20.318 00:19:20.318 real 0m10.295s 00:19:20.318 user 0m17.336s 00:19:20.318 sys 0m1.632s 00:19:20.318 12:39:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.318 ************************************ 00:19:20.318 12:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:20.318 END TEST raid_state_function_test 00:19:20.318 ************************************ 00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:19:20.318 12:39:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:20.318 12:39:02 -- 
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true
00:19:20.318 12:39:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:19:20.318 12:39:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:20.318 12:39:02 -- common/autotest_common.sh@10 -- # set +x
00:19:20.318 ************************************
00:19:20.318 START TEST raid_state_function_test_sb
00:19:20.318 ************************************
00:19:20.318 12:39:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=115962
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:19:20.318 Process raid pid: 115962
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115962'
00:19:20.318 12:39:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115962 /var/tmp/spdk-raid.sock
00:19:20.318 12:39:02 -- common/autotest_common.sh@819 -- # '[' -z 115962 ']'
00:19:20.318 12:39:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:20.318 12:39:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:20.318 12:39:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:20.318 12:39:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:20.318 12:39:02 -- common/autotest_common.sh@10 -- # set +x
00:19:20.318 [2024-10-01 12:39:02.644741] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:19:20.318 [2024-10-01 12:39:02.645017] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:20.318 [2024-10-01 12:39:02.812161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:20.577 [2024-10-01 12:39:02.970377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:20.836 [2024-10-01 12:39:03.123518] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:21.096 12:39:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:21.096 12:39:03 -- common/autotest_common.sh@852 -- # return 0
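[editor's note] waitforlisten (common/autotest_common.sh) is the gate between launching bdev_svc and issuing RPCs: it retries a harmless RPC against the UNIX socket until the freshly started app answers, or gives up if the pid dies. A hedged approximation, not the verbatim helper (rpc_get_methods is a standard SPDK RPC used here as the probe):

  waitforlisten() {
      local pid=$1 rpc_addr=$2 i
      for ((i = 0; i < 100; i++)); do
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
          kill -0 "$pid" || return 1                           # app died during startup
          sleep 0.1
      done
      return 1
  }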
"00000000-0000-0000-0000-000000000000", 00:19:21.356 "is_configured": false, 00:19:21.356 "data_offset": 0, 00:19:21.356 "data_size": 0 00:19:21.356 }, 00:19:21.356 { 00:19:21.356 "name": "BaseBdev3", 00:19:21.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.356 "is_configured": false, 00:19:21.356 "data_offset": 0, 00:19:21.356 "data_size": 0 00:19:21.356 } 00:19:21.356 ] 00:19:21.356 }' 00:19:21.356 12:39:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.356 12:39:03 -- common/autotest_common.sh@10 -- # set +x 00:19:21.924 12:39:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:22.184 [2024-10-01 12:39:04.501494] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:22.184 [2024-10-01 12:39:04.501662] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:22.184 12:39:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:22.184 [2024-10-01 12:39:04.681292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.184 [2024-10-01 12:39:04.681491] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:22.184 [2024-10-01 12:39:04.681570] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:22.184 [2024-10-01 12:39:04.681691] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:22.184 [2024-10-01 12:39:04.681773] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:22.184 [2024-10-01 12:39:04.681828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:22.184 12:39:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:22.444 [2024-10-01 12:39:04.891589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.444 BaseBdev1 00:19:22.444 12:39:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:22.444 12:39:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:22.444 12:39:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:22.444 12:39:04 -- common/autotest_common.sh@889 -- # local i 00:19:22.444 12:39:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:22.444 12:39:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:22.444 12:39:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.703 12:39:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:22.963 [ 00:19:22.963 { 00:19:22.963 "name": "BaseBdev1", 00:19:22.963 "aliases": [ 00:19:22.963 "4d534b72-12ce-47b2-903c-56aa5443f205" 00:19:22.963 ], 00:19:22.963 "product_name": "Malloc disk", 00:19:22.963 "block_size": 512, 00:19:22.963 "num_blocks": 65536, 00:19:22.963 "uuid": "4d534b72-12ce-47b2-903c-56aa5443f205", 00:19:22.963 "assigned_rate_limits": { 00:19:22.963 "rw_ios_per_sec": 0, 00:19:22.963 "rw_mbytes_per_sec": 0, 00:19:22.963 "r_mbytes_per_sec": 0, 00:19:22.963 
"w_mbytes_per_sec": 0 00:19:22.963 }, 00:19:22.963 "claimed": true, 00:19:22.963 "claim_type": "exclusive_write", 00:19:22.963 "zoned": false, 00:19:22.963 "supported_io_types": { 00:19:22.963 "read": true, 00:19:22.963 "write": true, 00:19:22.963 "unmap": true, 00:19:22.963 "write_zeroes": true, 00:19:22.963 "flush": true, 00:19:22.963 "reset": true, 00:19:22.963 "compare": false, 00:19:22.963 "compare_and_write": false, 00:19:22.963 "abort": true, 00:19:22.963 "nvme_admin": false, 00:19:22.963 "nvme_io": false 00:19:22.963 }, 00:19:22.963 "memory_domains": [ 00:19:22.963 { 00:19:22.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.963 "dma_device_type": 2 00:19:22.963 } 00:19:22.963 ], 00:19:22.963 "driver_specific": {} 00:19:22.963 } 00:19:22.963 ] 00:19:22.963 12:39:05 -- common/autotest_common.sh@895 -- # return 0 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.964 "name": "Existed_Raid", 00:19:22.964 "uuid": "ac97a964-a966-45b6-9836-427870515e40", 00:19:22.964 "strip_size_kb": 64, 00:19:22.964 "state": "configuring", 00:19:22.964 "raid_level": "raid0", 00:19:22.964 "superblock": true, 00:19:22.964 "num_base_bdevs": 3, 00:19:22.964 "num_base_bdevs_discovered": 1, 00:19:22.964 "num_base_bdevs_operational": 3, 00:19:22.964 "base_bdevs_list": [ 00:19:22.964 { 00:19:22.964 "name": "BaseBdev1", 00:19:22.964 "uuid": "4d534b72-12ce-47b2-903c-56aa5443f205", 00:19:22.964 "is_configured": true, 00:19:22.964 "data_offset": 2048, 00:19:22.964 "data_size": 63488 00:19:22.964 }, 00:19:22.964 { 00:19:22.964 "name": "BaseBdev2", 00:19:22.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.964 "is_configured": false, 00:19:22.964 "data_offset": 0, 00:19:22.964 "data_size": 0 00:19:22.964 }, 00:19:22.964 { 00:19:22.964 "name": "BaseBdev3", 00:19:22.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.964 "is_configured": false, 00:19:22.964 "data_offset": 0, 00:19:22.964 "data_size": 0 00:19:22.964 } 00:19:22.964 ] 00:19:22.964 }' 00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.964 12:39:05 -- common/autotest_common.sh@10 -- # set +x 00:19:23.531 12:39:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:23.789 [2024-10-01 12:39:06.141897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:23.789 [2024-10-01 12:39:06.142071] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:22.963 12:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:22.964 "name": "Existed_Raid",
00:19:22.964 "uuid": "ac97a964-a966-45b6-9836-427870515e40",
00:19:22.964 "strip_size_kb": 64,
00:19:22.964 "state": "configuring",
00:19:22.964 "raid_level": "raid0",
00:19:22.964 "superblock": true,
00:19:22.964 "num_base_bdevs": 3,
00:19:22.964 "num_base_bdevs_discovered": 1,
00:19:22.964 "num_base_bdevs_operational": 3,
00:19:22.964 "base_bdevs_list": [
00:19:22.964 {
00:19:22.964 "name": "BaseBdev1",
00:19:22.964 "uuid": "4d534b72-12ce-47b2-903c-56aa5443f205",
00:19:22.964 "is_configured": true,
00:19:22.964 "data_offset": 2048,
00:19:22.964 "data_size": 63488
00:19:22.964 },
00:19:22.964 {
00:19:22.964 "name": "BaseBdev2",
00:19:22.964 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:22.964 "is_configured": false,
00:19:22.964 "data_offset": 0,
00:19:22.964 "data_size": 0
00:19:22.964 },
00:19:22.964 {
00:19:22.964 "name": "BaseBdev3",
00:19:22.964 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:22.964 "is_configured": false,
00:19:22.964 "data_offset": 0,
00:19:22.964 "data_size": 0
00:19:22.964 }
00:19:22.964 ]
00:19:22.964 }'
00:19:22.964 12:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:22.964 12:39:05 -- common/autotest_common.sh@10 -- # set +x
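[editor's note] The superblock geometry is visible in this dump: the malloc base bdev has 65536 blocks, but its configured entry reports data_offset 2048 and data_size 63488, i.e. 65536 - 2048 = 63488 usable blocks, with 2048 512-byte blocks (1 MiB) at the head of each member reserved for the RAID superblock. The non-superblock run earlier in this log showed data_offset 0 and data_size 65536 for the same malloc bdevs.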
00:19:23.531 12:39:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:19:23.789 [2024-10-01 12:39:06.141897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:19:23.789 [2024-10-01 12:39:06.142071] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:19:23.789 12:39:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:19:23.789 12:39:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:19:24.049 12:39:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:19:24.309 BaseBdev1
00:19:24.309 12:39:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:19:24.309 12:39:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:19:24.309 12:39:06 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:19:24.309 12:39:06 -- common/autotest_common.sh@889 -- # local i
00:19:24.309 12:39:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:19:24.309 12:39:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:19:24.309 12:39:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:19:24.309 12:39:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:19:24.568 [
00:19:24.568 {
00:19:24.568 "name": "BaseBdev1",
00:19:24.568 "aliases": [
00:19:24.569 "d343c21b-1ab5-44a0-bb8a-d5c3a78d8e11"
00:19:24.569 ],
00:19:24.569 "product_name": "Malloc disk",
00:19:24.569 "block_size": 512,
00:19:24.569 "num_blocks": 65536,
00:19:24.569 "uuid": "d343c21b-1ab5-44a0-bb8a-d5c3a78d8e11",
00:19:24.569 "assigned_rate_limits": {
00:19:24.569 "rw_ios_per_sec": 0,
00:19:24.569 "rw_mbytes_per_sec": 0,
00:19:24.569 "r_mbytes_per_sec": 0,
00:19:24.569 "w_mbytes_per_sec": 0
00:19:24.569 },
00:19:24.569 "claimed": false,
00:19:24.569 "zoned": false,
00:19:24.569 "supported_io_types": {
00:19:24.569 "read": true,
00:19:24.569 "write": true,
00:19:24.569 "unmap": true,
00:19:24.569 "write_zeroes": true,
00:19:24.569 "flush": true,
00:19:24.569 "reset": true,
00:19:24.569 "compare": false,
00:19:24.569 "compare_and_write": false,
00:19:24.569 "abort": true,
00:19:24.569 "nvme_admin": false,
00:19:24.569 "nvme_io": false
00:19:24.569 },
00:19:24.569 "memory_domains": [
00:19:24.569 {
00:19:24.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:24.569 "dma_device_type": 2
00:19:24.569 }
00:19:24.569 ],
00:19:24.569 "driver_specific": {}
00:19:24.569 }
00:19:24.569 ]
00:19:24.569 12:39:06 -- common/autotest_common.sh@895 -- # return 0
00:19:24.569 12:39:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:19:24.829 [2024-10-01 12:39:07.151542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:24.829 [2024-10-01 12:39:07.153468] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:19:24.829 [2024-10-01 12:39:07.153630] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:19:24.829 [2024-10-01 12:39:07.153752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:19:24.829 [2024-10-01 12:39:07.153809] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:24.829 "name": "Existed_Raid",
00:19:24.829 "uuid": "edbd8d62-2511-4b4a-a107-fd804d109f46",
00:19:24.829 "strip_size_kb": 64,
00:19:24.829 "state": "configuring",
00:19:24.829 "raid_level": "raid0",
00:19:24.829 "superblock": true,
00:19:24.829 "num_base_bdevs": 3,
00:19:24.829 "num_base_bdevs_discovered": 1,
00:19:24.829 "num_base_bdevs_operational": 3,
00:19:24.829 "base_bdevs_list": [
00:19:24.829 {
00:19:24.829 "name": "BaseBdev1",
00:19:24.829 "uuid": "d343c21b-1ab5-44a0-bb8a-d5c3a78d8e11",
00:19:24.829 "is_configured": true,
00:19:24.829 "data_offset": 2048,
00:19:24.829 "data_size": 63488
00:19:24.829 },
00:19:24.829 {
00:19:24.829 "name": "BaseBdev2",
00:19:24.829 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:24.829 "is_configured": false,
00:19:24.829 "data_offset": 0,
00:19:24.829 "data_size": 0
00:19:24.829 },
00:19:24.829 {
00:19:24.829 "name": "BaseBdev3",
00:19:24.829 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:24.829 "is_configured": false,
00:19:24.829 "data_offset": 0,
00:19:24.829 "data_size": 0
00:19:24.829 }
00:19:24.829 ]
00:19:24.829 }'
00:19:24.829 12:39:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:24.829 12:39:07 -- common/autotest_common.sh@10 -- # set +x
"b566be00-67c4-41c8-8839-16613ea7078c" 00:19:26.177 ], 00:19:26.177 "product_name": "Malloc disk", 00:19:26.177 "block_size": 512, 00:19:26.177 "num_blocks": 65536, 00:19:26.177 "uuid": "b566be00-67c4-41c8-8839-16613ea7078c", 00:19:26.177 "assigned_rate_limits": { 00:19:26.177 "rw_ios_per_sec": 0, 00:19:26.177 "rw_mbytes_per_sec": 0, 00:19:26.177 "r_mbytes_per_sec": 0, 00:19:26.177 "w_mbytes_per_sec": 0 00:19:26.177 }, 00:19:26.177 "claimed": true, 00:19:26.177 "claim_type": "exclusive_write", 00:19:26.177 "zoned": false, 00:19:26.177 "supported_io_types": { 00:19:26.177 "read": true, 00:19:26.177 "write": true, 00:19:26.177 "unmap": true, 00:19:26.177 "write_zeroes": true, 00:19:26.177 "flush": true, 00:19:26.177 "reset": true, 00:19:26.177 "compare": false, 00:19:26.177 "compare_and_write": false, 00:19:26.177 "abort": true, 00:19:26.177 "nvme_admin": false, 00:19:26.177 "nvme_io": false 00:19:26.177 }, 00:19:26.177 "memory_domains": [ 00:19:26.177 { 00:19:26.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.177 "dma_device_type": 2 00:19:26.177 } 00:19:26.177 ], 00:19:26.177 "driver_specific": {} 00:19:26.177 } 00:19:26.177 ] 00:19:26.177 12:39:08 -- common/autotest_common.sh@895 -- # return 0 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.177 "name": "Existed_Raid", 00:19:26.177 "uuid": "edbd8d62-2511-4b4a-a107-fd804d109f46", 00:19:26.177 "strip_size_kb": 64, 00:19:26.177 "state": "configuring", 00:19:26.177 "raid_level": "raid0", 00:19:26.177 "superblock": true, 00:19:26.177 "num_base_bdevs": 3, 00:19:26.177 "num_base_bdevs_discovered": 2, 00:19:26.177 "num_base_bdevs_operational": 3, 00:19:26.177 "base_bdevs_list": [ 00:19:26.177 { 00:19:26.177 "name": "BaseBdev1", 00:19:26.177 "uuid": "d343c21b-1ab5-44a0-bb8a-d5c3a78d8e11", 00:19:26.177 "is_configured": true, 00:19:26.177 "data_offset": 2048, 00:19:26.177 "data_size": 63488 00:19:26.177 }, 00:19:26.177 { 00:19:26.177 "name": "BaseBdev2", 00:19:26.177 "uuid": "b566be00-67c4-41c8-8839-16613ea7078c", 00:19:26.177 "is_configured": true, 00:19:26.177 "data_offset": 2048, 00:19:26.177 "data_size": 63488 00:19:26.177 }, 00:19:26.177 { 00:19:26.177 "name": "BaseBdev3", 00:19:26.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.177 "is_configured": false, 00:19:26.177 "data_offset": 0, 00:19:26.177 "data_size": 0 00:19:26.177 
00:19:26.177 }
00:19:26.177 ]
00:19:26.177 }'
00:19:26.177 12:39:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:26.177 12:39:08 -- common/autotest_common.sh@10 -- # set +x
00:19:26.745 12:39:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:19:27.005 [2024-10-01 12:39:09.455184] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:19:27.005 [2024-10-01 12:39:09.455726] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:19:27.005 [2024-10-01 12:39:09.455865] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:19:27.005 [2024-10-01 12:39:09.456108] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:19:27.005 [2024-10-01 12:39:09.456506] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:19:27.005 [2024-10-01 12:39:09.456559] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:19:27.005 [2024-10-01 12:39:09.456808] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:27.005 BaseBdev3
00:19:27.005 12:39:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:19:27.005 12:39:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3
00:19:27.005 12:39:09 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:19:27.005 12:39:09 -- common/autotest_common.sh@889 -- # local i
00:19:27.005 12:39:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:19:27.005 12:39:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:19:27.005 12:39:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:19:27.264 12:39:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:19:27.524 [
00:19:27.524 {
00:19:27.524 "name": "BaseBdev3",
00:19:27.524 "aliases": [
00:19:27.524 "fbc97c40-1ce2-4388-b6df-615ece4c5b68"
00:19:27.524 ],
00:19:27.524 "product_name": "Malloc disk",
00:19:27.524 "block_size": 512,
00:19:27.524 "num_blocks": 65536,
00:19:27.524 "uuid": "fbc97c40-1ce2-4388-b6df-615ece4c5b68",
00:19:27.524 "assigned_rate_limits": {
00:19:27.524 "rw_ios_per_sec": 0,
00:19:27.524 "rw_mbytes_per_sec": 0,
00:19:27.524 "r_mbytes_per_sec": 0,
00:19:27.524 "w_mbytes_per_sec": 0
00:19:27.524 },
00:19:27.524 "claimed": true,
00:19:27.524 "claim_type": "exclusive_write",
00:19:27.524 "zoned": false,
00:19:27.524 "supported_io_types": {
00:19:27.524 "read": true,
00:19:27.524 "write": true,
00:19:27.524 "unmap": true,
00:19:27.524 "write_zeroes": true,
00:19:27.524 "flush": true,
00:19:27.524 "reset": true,
00:19:27.524 "compare": false,
00:19:27.524 "compare_and_write": false,
00:19:27.524 "abort": true,
00:19:27.524 "nvme_admin": false,
00:19:27.524 "nvme_io": false
00:19:27.524 },
00:19:27.524 "memory_domains": [
00:19:27.525 {
00:19:27.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:27.525 "dma_device_type": 2
00:19:27.525 }
00:19:27.525 ],
00:19:27.525 "driver_specific": {}
00:19:27.525 }
00:19:27.525 ]
00:19:27.525 12:39:09 -- common/autotest_common.sh@895 -- # return 0
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:27.525 12:39:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:27.525 12:39:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:27.525 "name": "Existed_Raid",
00:19:27.525 "uuid": "edbd8d62-2511-4b4a-a107-fd804d109f46",
00:19:27.525 "strip_size_kb": 64,
00:19:27.525 "state": "online",
00:19:27.525 "raid_level": "raid0",
00:19:27.525 "superblock": true,
00:19:27.525 "num_base_bdevs": 3,
00:19:27.525 "num_base_bdevs_discovered": 3,
00:19:27.525 "num_base_bdevs_operational": 3,
00:19:27.525 "base_bdevs_list": [
00:19:27.525 {
00:19:27.525 "name": "BaseBdev1",
00:19:27.525 "uuid": "d343c21b-1ab5-44a0-bb8a-d5c3a78d8e11",
00:19:27.525 "is_configured": true,
00:19:27.525 "data_offset": 2048,
00:19:27.525 "data_size": 63488
00:19:27.525 },
00:19:27.525 {
00:19:27.525 "name": "BaseBdev2",
00:19:27.525 "uuid": "b566be00-67c4-41c8-8839-16613ea7078c",
00:19:27.525 "is_configured": true,
00:19:27.525 "data_offset": 2048,
00:19:27.525 "data_size": 63488
00:19:27.525 },
00:19:27.525 {
00:19:27.525 "name": "BaseBdev3",
00:19:27.525 "uuid": "fbc97c40-1ce2-4388-b6df-615ece4c5b68",
00:19:27.525 "is_configured": true,
00:19:27.525 "data_offset": 2048,
00:19:27.525 "data_size": 63488
00:19:27.525 }
00:19:27.525 ]
00:19:27.525 }'
00:19:27.525 12:39:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:27.525 12:39:10 -- common/autotest_common.sh@10 -- # set +x
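[editor's note] Only when the third base bdev is claimed does the array leave "configuring": the blockcnt 190464 in the registration log above is exactly 3 x 63488 data blocks for raid0, and the very next state check reads "online". Waiting for that transition can be sketched as a simple poll ($rpc as in the earlier sketch):

  until [[ $($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state') == online ]]; do
      sleep 0.1
  done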
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:19:28.378 [2024-10-01 12:39:10.717421] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:28.378 [2024-10-01 12:39:10.717598] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:28.378 [2024-10-01 12:39:10.717809] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@197 -- # return 1
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:28.378 12:39:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:28.637 12:39:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:28.637 "name": "Existed_Raid",
00:19:28.637 "uuid": "edbd8d62-2511-4b4a-a107-fd804d109f46",
00:19:28.637 "strip_size_kb": 64,
00:19:28.637 "state": "offline",
00:19:28.637 "raid_level": "raid0",
00:19:28.637 "superblock": true,
00:19:28.637 "num_base_bdevs": 3,
00:19:28.637 "num_base_bdevs_discovered": 2,
00:19:28.637 "num_base_bdevs_operational": 2,
00:19:28.637 "base_bdevs_list": [
00:19:28.638 {
00:19:28.638 "name": null,
00:19:28.638 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:28.638 "is_configured": false,
00:19:28.638 "data_offset": 2048,
00:19:28.638 "data_size": 63488
00:19:28.638 },
00:19:28.638 {
00:19:28.638 "name": "BaseBdev2",
00:19:28.638 "uuid": "b566be00-67c4-41c8-8839-16613ea7078c",
00:19:28.638 "is_configured": true,
00:19:28.638 "data_offset": 2048,
00:19:28.638 "data_size": 63488
00:19:28.638 },
00:19:28.638 {
00:19:28.638 "name": "BaseBdev3",
00:19:28.638 "uuid": "fbc97c40-1ce2-4388-b6df-615ece4c5b68",
00:19:28.638 "is_configured": true,
00:19:28.638 "data_offset": 2048,
00:19:28.638 "data_size": 63488
00:19:28.638 }
00:19:28.638 ]
00:19:28.638 }'
00:19:28.638 12:39:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:28.638 12:39:10 -- common/autotest_common.sh@10 -- # set +x
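[editor's note] Because raid0 stripes without redundancy, deleting a single base bdev immediately deconfigures the whole array, which is why has_redundancy returned 1 above and the expected state is "offline" rather than any degraded state. The dispatch is a plain case statement; a sketch consistent with the raid0 branch traced at bdev_raid.sh@195-197 (the redundant level names are an assumption, not read from this log):

  has_redundancy() {
      case $1 in
          raid1 | raid5f) return 0 ;;   # assumed redundant levels
          *) return 1 ;;                # raid0 lands here, as traced above
      esac
  }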
00:19:29.206 12:39:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:19:29.206 12:39:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:19:29.206 12:39:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:29.206 12:39:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:19:29.466 12:39:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:19:29.466 12:39:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:29.466 12:39:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:19:29.466 [2024-10-01 12:39:11.909120] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:19:29.725 12:39:12 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:19:29.725 12:39:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:19:29.725 12:39:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:29.725 12:39:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:19:29.725 12:39:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:19:29.725 12:39:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:29.725 12:39:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:19:29.985 [2024-10-01 12:39:12.348151] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:19:29.985 [2024-10-01 12:39:12.348376] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:19:29.985 12:39:12 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:19:29.985 12:39:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:19:29.985 12:39:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:29.985 12:39:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:19:30.244 12:39:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:19:30.244 12:39:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:19:30.244 12:39:12 -- bdev/bdev_raid.sh@287 -- # killprocess 115962
00:19:30.244 12:39:12 -- common/autotest_common.sh@926 -- # '[' -z 115962 ']'
00:19:30.244 12:39:12 -- common/autotest_common.sh@930 -- # kill -0 115962
00:19:30.244 12:39:12 -- common/autotest_common.sh@931 -- # uname
00:19:30.244 12:39:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:19:30.244 12:39:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115962
killing process with pid 115962
00:19:30.244 12:39:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:19:30.244 12:39:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:19:30.244 12:39:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115962'
00:19:30.244 12:39:12 -- common/autotest_common.sh@945 -- # kill 115962
00:19:30.244 [2024-10-01 12:39:12.682358] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:30.244 12:39:12 -- common/autotest_common.sh@950 -- # wait 115962
00:19:30.244 [2024-10-01 12:39:12.682471] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:31.656 ************************************
00:19:31.656 END TEST raid_state_function_test_sb
00:19:31.656 ************************************
00:19:31.656 12:39:13 -- bdev/bdev_raid.sh@289 -- # return 0
00:19:31.656
00:19:31.656 real 0m11.192s
00:19:31.656 user 0m18.839s
00:19:31.656 sys 0m1.804s
00:19:31.656 12:39:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:31.656 12:39:13 -- common/autotest_common.sh@10 -- # set +x
00:19:31.656 12:39:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:19:31.656 12:39:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:19:31.656 12:39:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:31.656 12:39:13 -- common/autotest_common.sh@10 -- # set +x
00:19:31.656 ************************************
00:19:31.656 START TEST raid_superblock_test
00:19:31.656 ************************************
00:19:31.657 12:39:13 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=116333
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116333 /var/tmp/spdk-raid.sock
00:19:31.657 12:39:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:19:31.657 12:39:13 -- common/autotest_common.sh@819 -- # '[' -z 116333 ']'
00:19:31.657 12:39:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:31.657 12:39:13 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:31.657 12:39:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:31.657 12:39:13 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:31.657 12:39:13 -- common/autotest_common.sh@10 -- # set +x
00:19:31.657 [2024-10-01 12:39:13.918366] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:19:31.657 [2024-10-01 12:39:13.918642] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116333 ]
00:19:31.657 [2024-10-01 12:39:14.082264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:31.915 [2024-10-01 12:39:14.269735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:32.174 [2024-10-01 12:39:14.449895] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:32.432 12:39:14 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:32.432 12:39:14 -- common/autotest_common.sh@852 -- # return 0
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:19:32.432 malloc1
00:19:32.432 12:39:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:32.691 [2024-10-01 12:39:15.054295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:32.691 [2024-10-01 12:39:15.054551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:32.691 [2024-10-01 12:39:15.054625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:19:32.691 [2024-10-01 12:39:15.054752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:32.691 [2024-10-01 12:39:15.057400] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:32.691 [2024-10-01 12:39:15.057582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:32.691 pt1
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:32.691 12:39:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:19:32.950 malloc2
00:19:32.950 12:39:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:32.950 [2024-10-01 12:39:15.478049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:32.950 [2024-10-01 12:39:15.478260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:32.950 [2024-10-01 12:39:15.478342] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:19:32.950 [2024-10-01 12:39:15.478488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:32.950 [2024-10-01 12:39:15.480991] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:32.950 [2024-10-01 12:39:15.481163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:32.950 pt2
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:19:33.209 malloc3
00:19:33.209 12:39:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:33.468 [2024-10-01 12:39:15.853314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:33.468 [2024-10-01 12:39:15.853502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:33.468 [2024-10-01 12:39:15.853582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:19:33.468 [2024-10-01 12:39:15.853692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:33.468 [2024-10-01 12:39:15.856128] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:33.468 [2024-10-01 12:39:15.856303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:33.468 pt3
00:19:33.468 12:39:15 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:33.468 12:39:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
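[editor's note] For the superblock test each malloc bdev is wrapped in a passthru bdev with a pinned UUID; ptN forwards all I/O to mallocN but gives the RAID layer a stable identity to claim and stamp the superblock against. One pairing, using the literal commands from this trace ($rpc as in the earlier sketch):

  $rpc bdev_malloc_create 32 512 -b malloc1          # 32 MiB backing store, 512-byte blocks
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001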
00:19:33.468 12:39:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:19:33.726 [2024-10-01 12:39:16.037101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:33.726 [2024-10-01 12:39:16.039290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:33.726 [2024-10-01 12:39:16.039471] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:33.726 [2024-10-01 12:39:16.039673] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780
00:19:33.726 [2024-10-01 12:39:16.039953] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:19:33.726 [2024-10-01 12:39:16.040126] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:19:33.726 [2024-10-01 12:39:16.040576] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780
00:19:33.726 [2024-10-01 12:39:16.040687] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780
00:19:33.726 [2024-10-01 12:39:16.040918] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:33.726 12:39:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:33.726 "name": "raid_bdev1",
00:19:33.726 "uuid": "f160be46-4e8d-44f2-923a-7ed530063ef8",
00:19:33.726 "strip_size_kb": 64,
00:19:33.726 "state": "online",
00:19:33.726 "raid_level": "raid0",
00:19:33.726 "superblock": true,
00:19:33.726 "num_base_bdevs": 3,
00:19:33.726 "num_base_bdevs_discovered": 3,
00:19:33.726 "num_base_bdevs_operational": 3,
00:19:33.727 "base_bdevs_list": [
00:19:33.727 {
00:19:33.727 "name": "pt1",
"0dc9544c-4bc4-5ddc-a041-80f87cf210ff", 00:19:33.727 "is_configured": true, 00:19:33.727 "data_offset": 2048, 00:19:33.727 "data_size": 63488 00:19:33.727 }, 00:19:33.727 { 00:19:33.727 "name": "pt2", 00:19:33.727 "uuid": "e05ac16e-ad9e-5327-991b-24df1d8188e0", 00:19:33.727 "is_configured": true, 00:19:33.727 "data_offset": 2048, 00:19:33.727 "data_size": 63488 00:19:33.727 }, 00:19:33.727 { 00:19:33.727 "name": "pt3", 00:19:33.727 "uuid": "7fb87969-20c4-5078-835e-830f3f9a1860", 00:19:33.727 "is_configured": true, 00:19:33.727 "data_offset": 2048, 00:19:33.727 "data_size": 63488 00:19:33.727 } 00:19:33.727 ] 00:19:33.727 }' 00:19:33.727 12:39:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.727 12:39:16 -- common/autotest_common.sh@10 -- # set +x 00:19:34.293 12:39:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:34.293 12:39:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:34.552 [2024-10-01 12:39:16.948126] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.552 12:39:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f160be46-4e8d-44f2-923a-7ed530063ef8 00:19:34.552 12:39:16 -- bdev/bdev_raid.sh@380 -- # '[' -z f160be46-4e8d-44f2-923a-7ed530063ef8 ']' 00:19:34.552 12:39:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:34.811 [2024-10-01 12:39:17.111749] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.811 [2024-10-01 12:39:17.111902] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.811 [2024-10-01 12:39:17.112082] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.811 [2024-10-01 12:39:17.112169] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.811 [2024-10-01 12:39:17.112262] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:35.070 12:39:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.070 12:39:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:35.587 12:39:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:35.587 12:39:17 -- bdev/bdev_raid.sh@401 -- # NOT 
00:19:34.552 12:39:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:34.811 [2024-10-01 12:39:17.111749] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:34.811 [2024-10-01 12:39:17.111902] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:34.811 [2024-10-01 12:39:17.112082] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:34.811 [2024-10-01 12:39:17.112169] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:34.811 [2024-10-01 12:39:17.112262] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline
00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:19:34.811 12:39:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:19:35.070 12:39:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:19:35.070 12:39:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:19:35.070 12:39:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:19:35.329 12:39:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:19:35.587 12:39:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:19:35.587 12:39:17 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:19:35.587 12:39:17 -- common/autotest_common.sh@640 -- # local es=0
00:19:35.587 12:39:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:19:35.587 12:39:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:35.587 12:39:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:35.587 12:39:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:35.588 12:39:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:35.588 12:39:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:35.588 12:39:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:35.588 12:39:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:35.588 12:39:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:19:35.588 12:39:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:19:35.846 [2024-10-01 12:39:18.162272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:19:35.846 [2024-10-01 12:39:18.164589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:19:35.846 [2024-10-01 12:39:18.164774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:19:35.846 [2024-10-01 12:39:18.164863] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:19:35.846 [2024-10-01 12:39:18.165064] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:19:35.846 [2024-10-01 12:39:18.165264] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:19:35.846 [2024-10-01 12:39:18.165350] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:35.846 [2024-10-01 12:39:18.165427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring
00:19:35.847 request:
00:19:35.847 {
00:19:35.847 "name": "raid_bdev1",
00:19:35.847 "raid_level": "raid0",
00:19:35.847 "base_bdevs": [
00:19:35.847 "malloc1",
00:19:35.847 "malloc2",
00:19:35.847 "malloc3"
00:19:35.847 ],
00:19:35.847 "superblock": false,
00:19:35.847 "strip_size_kb": 64,
00:19:35.847 "method": "bdev_raid_create",
00:19:35.847 "req_id": 1
00:19:35.847 }
00:19:35.847 Got JSON-RPC error response
00:19:35.847 response:
00:19:35.847 {
00:19:35.847 "code": -17,
00:19:35.847 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:19:35.847 }
00:19:35.847 12:39:18 -- common/autotest_common.sh@643 -- # es=1
00:19:35.847 12:39:18 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:19:35.847 12:39:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:19:35.847 12:39:18 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
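[editor's note] The NOT wrapper inverts an expected failure: re-creating raid_bdev1 directly on malloc1-malloc3, which still carry the old superblock, must fail with -17 (File exists), and the test passes precisely because the RPC exits non-zero. Stripped of the valid_exec_arg/xtrace bookkeeping visible above, the wrapper behaves like this sketch (not the verbatim helper):

  NOT() {
      if "$@"; then
          return 1      # command unexpectedly succeeded
      fi
      return 0          # failed, as expected
  }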
00:19:35.847 12:39:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:19:35.847 12:39:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:35.847 12:39:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:19:35.847 12:39:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:19:35.847 12:39:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:36.106 [2024-10-01 12:39:18.529695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:36.106 [2024-10-01 12:39:18.529876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:36.106 [2024-10-01 12:39:18.529951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:19:36.106 [2024-10-01 12:39:18.530037] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:36.106 [2024-10-01 12:39:18.532504] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:36.106 [2024-10-01 12:39:18.532677] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:36.106 [2024-10-01 12:39:18.532898] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:19:36.106 [2024-10-01 12:39:18.533052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:36.106 pt1
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:36.106 12:39:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:36.365 12:39:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:36.365 "name": "raid_bdev1",
00:19:36.365 "uuid": "f160be46-4e8d-44f2-923a-7ed530063ef8",
00:19:36.365 "strip_size_kb": 64,
00:19:36.366 "state": "configuring",
00:19:36.366 "raid_level": "raid0",
00:19:36.366 "superblock": true,
00:19:36.366 "num_base_bdevs": 3,
00:19:36.366 "num_base_bdevs_discovered": 1,
00:19:36.366 "num_base_bdevs_operational": 3,
00:19:36.366 "base_bdevs_list": [
00:19:36.366 {
00:19:36.366 "name": "pt1",
00:19:36.366 "uuid": "0dc9544c-4bc4-5ddc-a041-80f87cf210ff",
00:19:36.366 "is_configured": true,
00:19:36.366 "data_offset": 2048,
00:19:36.366 "data_size": 63488
00:19:36.366 },
00:19:36.366 {
00:19:36.366 "name": null,
00:19:36.366 "uuid": "e05ac16e-ad9e-5327-991b-24df1d8188e0",
00:19:36.366 "is_configured": false,
00:19:36.366 "data_offset": 2048,
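[editor's note] This is the superblock payoff: nothing called bdev_raid_create here. Re-creating pt1 alone made bdev examine find the RAID superblock on it (raid_bdev_examine_load_sb_cb), claim the bdev, and resurrect raid_bdev1 in "configuring" with 1 of 3 members discovered. A quick way to watch the remaining members re-join as they reappear ($rpc as in the earlier sketch):

  $rpc bdev_raid_get_bdevs all | jq -r '.[0].num_base_bdevs_discovered'   # -> 1 after pt1 alone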
00:19:36.366 "data_size": 63488 00:19:36.366 } 00:19:36.366 ] 00:19:36.366 }' 00:19:36.366 12:39:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.366 12:39:18 -- common/autotest_common.sh@10 -- # set +x 00:19:36.943 12:39:19 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:19:36.943 12:39:19 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.943 [2024-10-01 12:39:19.424407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.943 [2024-10-01 12:39:19.424611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.943 [2024-10-01 12:39:19.424707] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:36.943 [2024-10-01 12:39:19.424836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.943 [2024-10-01 12:39:19.425285] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.943 [2024-10-01 12:39:19.425426] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.943 [2024-10-01 12:39:19.425606] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:36.943 [2024-10-01 12:39:19.425657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.943 pt2 00:19:36.943 12:39:19 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:37.202 [2024-10-01 12:39:19.608154] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.202 12:39:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.462 12:39:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.462 "name": "raid_bdev1", 00:19:37.462 "uuid": "f160be46-4e8d-44f2-923a-7ed530063ef8", 00:19:37.462 "strip_size_kb": 64, 00:19:37.462 "state": "configuring", 00:19:37.462 "raid_level": "raid0", 00:19:37.462 "superblock": true, 00:19:37.462 "num_base_bdevs": 3, 00:19:37.462 "num_base_bdevs_discovered": 1, 00:19:37.462 "num_base_bdevs_operational": 3, 00:19:37.462 "base_bdevs_list": [ 00:19:37.462 { 00:19:37.462 "name": "pt1", 00:19:37.462 "uuid": "0dc9544c-4bc4-5ddc-a041-80f87cf210ff", 00:19:37.462 "is_configured": true, 00:19:37.462 "data_offset": 2048, 00:19:37.462 "data_size": 63488 00:19:37.462 }, 00:19:37.462 { 00:19:37.462 "name": null, 00:19:37.462 "uuid": 
"e05ac16e-ad9e-5327-991b-24df1d8188e0", 00:19:37.462 "is_configured": false, 00:19:37.462 "data_offset": 2048, 00:19:37.462 "data_size": 63488 00:19:37.462 }, 00:19:37.462 { 00:19:37.462 "name": null, 00:19:37.462 "uuid": "7fb87969-20c4-5078-835e-830f3f9a1860", 00:19:37.462 "is_configured": false, 00:19:37.462 "data_offset": 2048, 00:19:37.462 "data_size": 63488 00:19:37.462 } 00:19:37.462 ] 00:19:37.462 }' 00:19:37.462 12:39:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.462 12:39:19 -- common/autotest_common.sh@10 -- # set +x 00:19:38.030 12:39:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:38.030 12:39:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:38.030 12:39:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.030 [2024-10-01 12:39:20.503441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.030 [2024-10-01 12:39:20.503712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.030 [2024-10-01 12:39:20.503786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:38.030 [2024-10-01 12:39:20.503931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.030 [2024-10-01 12:39:20.504509] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.030 [2024-10-01 12:39:20.504651] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.030 [2024-10-01 12:39:20.504865] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:38.030 [2024-10-01 12:39:20.504972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.030 pt2 00:19:38.030 12:39:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:38.030 12:39:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:38.030 12:39:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:38.289 [2024-10-01 12:39:20.667226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:38.289 [2024-10-01 12:39:20.667510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.289 [2024-10-01 12:39:20.667590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:38.289 [2024-10-01 12:39:20.667704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.289 [2024-10-01 12:39:20.668242] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.289 [2024-10-01 12:39:20.668406] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:38.290 [2024-10-01 12:39:20.668624] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:38.290 [2024-10-01 12:39:20.668767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:38.290 [2024-10-01 12:39:20.668958] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:19:38.290 [2024-10-01 12:39:20.669112] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:38.290 [2024-10-01 12:39:20.669261] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:19:38.290 [2024-10-01 12:39:20.669822] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:19:38.290 [2024-10-01 12:39:20.669923] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:19:38.290 [2024-10-01 12:39:20.670133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.290 pt3 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.290 12:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.550 12:39:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.550 "name": "raid_bdev1", 00:19:38.550 "uuid": "f160be46-4e8d-44f2-923a-7ed530063ef8", 00:19:38.550 "strip_size_kb": 64, 00:19:38.550 "state": "online", 00:19:38.550 "raid_level": "raid0", 00:19:38.550 "superblock": true, 00:19:38.550 "num_base_bdevs": 3, 00:19:38.550 "num_base_bdevs_discovered": 3, 00:19:38.550 "num_base_bdevs_operational": 3, 00:19:38.550 "base_bdevs_list": [ 00:19:38.550 { 00:19:38.550 "name": "pt1", 00:19:38.550 "uuid": "0dc9544c-4bc4-5ddc-a041-80f87cf210ff", 00:19:38.550 "is_configured": true, 00:19:38.550 "data_offset": 2048, 00:19:38.550 "data_size": 63488 00:19:38.550 }, 00:19:38.550 { 00:19:38.550 "name": "pt2", 00:19:38.550 "uuid": "e05ac16e-ad9e-5327-991b-24df1d8188e0", 00:19:38.550 "is_configured": true, 00:19:38.550 "data_offset": 2048, 00:19:38.550 "data_size": 63488 00:19:38.550 }, 00:19:38.550 { 00:19:38.550 "name": "pt3", 00:19:38.550 "uuid": "7fb87969-20c4-5078-835e-830f3f9a1860", 00:19:38.550 "is_configured": true, 00:19:38.550 "data_offset": 2048, 00:19:38.550 "data_size": 63488 00:19:38.550 } 00:19:38.550 ] 00:19:38.550 }' 00:19:38.550 12:39:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.550 12:39:20 -- common/autotest_common.sh@10 -- # set +x 00:19:39.119 12:39:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:39.119 12:39:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:39.119 [2024-10-01 12:39:21.538078] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.119 12:39:21 -- bdev/bdev_raid.sh@430 -- # '[' f160be46-4e8d-44f2-923a-7ed530063ef8 '!=' f160be46-4e8d-44f2-923a-7ed530063ef8 ']' 00:19:39.119 12:39:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:19:39.119 12:39:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:39.119 
12:39:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:39.119 12:39:21 -- bdev/bdev_raid.sh@511 -- # killprocess 116333 00:19:39.119 12:39:21 -- common/autotest_common.sh@926 -- # '[' -z 116333 ']' 00:19:39.119 12:39:21 -- common/autotest_common.sh@930 -- # kill -0 116333 00:19:39.119 12:39:21 -- common/autotest_common.sh@931 -- # uname 00:19:39.119 12:39:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:39.119 12:39:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116333 00:19:39.119 12:39:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:39.119 12:39:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:39.119 12:39:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116333' 00:19:39.119 killing process with pid 116333 00:19:39.119 12:39:21 -- common/autotest_common.sh@945 -- # kill 116333 00:19:39.119 [2024-10-01 12:39:21.593445] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:39.119 12:39:21 -- common/autotest_common.sh@950 -- # wait 116333 00:19:39.119 [2024-10-01 12:39:21.593609] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.119 [2024-10-01 12:39:21.593660] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.119 [2024-10-01 12:39:21.593669] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:19:39.379 [2024-10-01 12:39:21.850429] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.759 ************************************ 00:19:40.759 END TEST raid_superblock_test 00:19:40.759 ************************************ 00:19:40.759 12:39:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:40.759 00:19:40.759 real 0m9.205s 00:19:40.759 user 0m15.008s 00:19:40.759 sys 0m1.502s 00:19:40.759 12:39:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.759 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:40.759 12:39:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:40.759 12:39:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:19:40.759 12:39:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:40.760 12:39:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:40.760 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:40.760 ************************************ 00:19:40.760 START TEST raid_state_function_test 00:19:40.760 ************************************ 00:19:40.760 12:39:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i <= 
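Note on the test that just completed: raid_superblock_test deliberately drives bdev_raid_create into its negative path. Because the malloc base bdevs still carry a raid superblock from an earlier create, the second create is rejected with JSON-RPC error -17 ("File exists"), and raid_bdev1 is instead re-assembled through the examine path as pt1, pt2 and pt3 are registered, moving from "configuring" (1 of 3 bases discovered) to "online" (3 of 3). The verify_raid_bdev_state helper polls the target the same way at every step; a minimal sketch of that query, assuming an SPDK target is still listening on the socket used above:

  # sketch, not part of the recorded run: query raid state the way verify_raid_bdev_state does
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state'
  # expected: "configuring" while bases are still being claimed, "online" once all 3 are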
num_base_bdevs )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=116634 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116634' 00:19:40.760 Process raid pid: 116634 00:19:40.760 12:39:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116634 /var/tmp/spdk-raid.sock 00:19:40.760 12:39:23 -- common/autotest_common.sh@819 -- # '[' -z 116634 ']' 00:19:40.760 12:39:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:40.760 12:39:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:40.760 12:39:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:40.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:40.760 12:39:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:40.760 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:40.760 [2024-10-01 12:39:23.219975] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:40.760 [2024-10-01 12:39:23.220273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.020 [2024-10-01 12:39:23.385915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.020 [2024-10-01 12:39:23.534932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.279 [2024-10-01 12:39:23.685045] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.538 12:39:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:41.538 12:39:24 -- common/autotest_common.sh@852 -- # return 0 00:19:41.538 12:39:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:41.797 [2024-10-01 12:39:24.194603] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.797 [2024-10-01 12:39:24.194789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.797 [2024-10-01 12:39:24.194907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.797 [2024-10-01 12:39:24.194958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.797 [2024-10-01 12:39:24.194984] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:41.797 [2024-10-01 12:39:24.195038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.797 12:39:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.056 12:39:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.056 "name": "Existed_Raid", 00:19:42.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.056 "strip_size_kb": 64, 00:19:42.056 "state": "configuring", 00:19:42.056 "raid_level": "concat", 00:19:42.056 "superblock": false, 00:19:42.056 "num_base_bdevs": 3, 00:19:42.056 "num_base_bdevs_discovered": 0, 00:19:42.056 "num_base_bdevs_operational": 3, 00:19:42.056 "base_bdevs_list": [ 00:19:42.056 { 00:19:42.056 "name": "BaseBdev1", 00:19:42.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.056 "is_configured": false, 00:19:42.056 "data_offset": 0, 00:19:42.056 "data_size": 0 00:19:42.056 }, 00:19:42.056 { 00:19:42.056 "name": "BaseBdev2", 00:19:42.056 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:42.056 "is_configured": false, 00:19:42.056 "data_offset": 0, 00:19:42.056 "data_size": 0 00:19:42.056 }, 00:19:42.056 { 00:19:42.056 "name": "BaseBdev3", 00:19:42.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.056 "is_configured": false, 00:19:42.056 "data_offset": 0, 00:19:42.056 "data_size": 0 00:19:42.056 } 00:19:42.056 ] 00:19:42.056 }' 00:19:42.056 12:39:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.056 12:39:24 -- common/autotest_common.sh@10 -- # set +x 00:19:42.625 12:39:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:42.625 [2024-10-01 12:39:25.101206] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:42.625 [2024-10-01 12:39:25.101390] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:42.625 12:39:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:42.884 [2024-10-01 12:39:25.292963] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:42.884 [2024-10-01 12:39:25.293123] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:42.884 [2024-10-01 12:39:25.293196] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.884 [2024-10-01 12:39:25.293249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.884 [2024-10-01 12:39:25.293276] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.884 [2024-10-01 12:39:25.293319] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.884 12:39:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:43.144 [2024-10-01 12:39:25.495315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.144 BaseBdev1 00:19:43.144 12:39:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:43.144 12:39:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:43.144 12:39:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:43.144 12:39:25 -- common/autotest_common.sh@889 -- # local i 00:19:43.144 12:39:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:43.144 12:39:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:43.144 12:39:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.403 12:39:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:43.403 [ 00:19:43.403 { 00:19:43.403 "name": "BaseBdev1", 00:19:43.403 "aliases": [ 00:19:43.403 "1dc8bbcd-85c5-414c-a22c-1a79812e4c58" 00:19:43.403 ], 00:19:43.403 "product_name": "Malloc disk", 00:19:43.403 "block_size": 512, 00:19:43.403 "num_blocks": 65536, 00:19:43.403 "uuid": "1dc8bbcd-85c5-414c-a22c-1a79812e4c58", 00:19:43.403 "assigned_rate_limits": { 00:19:43.403 "rw_ios_per_sec": 0, 00:19:43.403 "rw_mbytes_per_sec": 0, 00:19:43.403 "r_mbytes_per_sec": 0, 00:19:43.403 "w_mbytes_per_sec": 
0 00:19:43.403 }, 00:19:43.403 "claimed": true, 00:19:43.403 "claim_type": "exclusive_write", 00:19:43.403 "zoned": false, 00:19:43.403 "supported_io_types": { 00:19:43.403 "read": true, 00:19:43.403 "write": true, 00:19:43.403 "unmap": true, 00:19:43.403 "write_zeroes": true, 00:19:43.403 "flush": true, 00:19:43.403 "reset": true, 00:19:43.403 "compare": false, 00:19:43.403 "compare_and_write": false, 00:19:43.403 "abort": true, 00:19:43.403 "nvme_admin": false, 00:19:43.403 "nvme_io": false 00:19:43.403 }, 00:19:43.403 "memory_domains": [ 00:19:43.404 { 00:19:43.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.404 "dma_device_type": 2 00:19:43.404 } 00:19:43.404 ], 00:19:43.404 "driver_specific": {} 00:19:43.404 } 00:19:43.404 ] 00:19:43.404 12:39:25 -- common/autotest_common.sh@895 -- # return 0 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.404 12:39:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.663 12:39:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.663 "name": "Existed_Raid", 00:19:43.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.663 "strip_size_kb": 64, 00:19:43.663 "state": "configuring", 00:19:43.663 "raid_level": "concat", 00:19:43.663 "superblock": false, 00:19:43.663 "num_base_bdevs": 3, 00:19:43.663 "num_base_bdevs_discovered": 1, 00:19:43.663 "num_base_bdevs_operational": 3, 00:19:43.663 "base_bdevs_list": [ 00:19:43.663 { 00:19:43.663 "name": "BaseBdev1", 00:19:43.663 "uuid": "1dc8bbcd-85c5-414c-a22c-1a79812e4c58", 00:19:43.663 "is_configured": true, 00:19:43.663 "data_offset": 0, 00:19:43.663 "data_size": 65536 00:19:43.663 }, 00:19:43.663 { 00:19:43.663 "name": "BaseBdev2", 00:19:43.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.663 "is_configured": false, 00:19:43.663 "data_offset": 0, 00:19:43.663 "data_size": 0 00:19:43.663 }, 00:19:43.663 { 00:19:43.663 "name": "BaseBdev3", 00:19:43.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.663 "is_configured": false, 00:19:43.663 "data_offset": 0, 00:19:43.663 "data_size": 0 00:19:43.663 } 00:19:43.663 ] 00:19:43.663 }' 00:19:43.663 12:39:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.663 12:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.231 12:39:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:44.231 [2024-10-01 12:39:26.669651] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:44.231 [2024-10-01 12:39:26.669838] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:19:44.231 12:39:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:44.231 12:39:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:44.500 [2024-10-01 12:39:26.853428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.500 [2024-10-01 12:39:26.855349] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.500 [2024-10-01 12:39:26.855509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.500 [2024-10-01 12:39:26.855615] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:44.500 [2024-10-01 12:39:26.855670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.500 12:39:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.759 12:39:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.759 "name": "Existed_Raid", 00:19:44.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.759 "strip_size_kb": 64, 00:19:44.759 "state": "configuring", 00:19:44.759 "raid_level": "concat", 00:19:44.759 "superblock": false, 00:19:44.759 "num_base_bdevs": 3, 00:19:44.759 "num_base_bdevs_discovered": 1, 00:19:44.759 "num_base_bdevs_operational": 3, 00:19:44.759 "base_bdevs_list": [ 00:19:44.759 { 00:19:44.759 "name": "BaseBdev1", 00:19:44.759 "uuid": "1dc8bbcd-85c5-414c-a22c-1a79812e4c58", 00:19:44.759 "is_configured": true, 00:19:44.759 "data_offset": 0, 00:19:44.759 "data_size": 65536 00:19:44.759 }, 00:19:44.759 { 00:19:44.759 "name": "BaseBdev2", 00:19:44.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.759 "is_configured": false, 00:19:44.759 "data_offset": 0, 00:19:44.759 "data_size": 0 00:19:44.759 }, 00:19:44.759 { 00:19:44.759 "name": "BaseBdev3", 00:19:44.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.759 "is_configured": false, 00:19:44.759 "data_offset": 0, 00:19:44.759 "data_size": 0 00:19:44.759 } 00:19:44.759 ] 00:19:44.759 }' 00:19:44.759 12:39:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.759 12:39:27 -- common/autotest_common.sh@10 -- # set +x 00:19:45.327 12:39:27 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:45.327 [2024-10-01 12:39:27.783659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.327 BaseBdev2 00:19:45.327 12:39:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:45.327 12:39:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:45.327 12:39:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:45.327 12:39:27 -- common/autotest_common.sh@889 -- # local i 00:19:45.327 12:39:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:45.327 12:39:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:45.327 12:39:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:45.586 12:39:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:45.845 [ 00:19:45.845 { 00:19:45.845 "name": "BaseBdev2", 00:19:45.845 "aliases": [ 00:19:45.845 "ad848c17-84c1-4c86-b0c0-07d3c101339a" 00:19:45.845 ], 00:19:45.845 "product_name": "Malloc disk", 00:19:45.845 "block_size": 512, 00:19:45.845 "num_blocks": 65536, 00:19:45.845 "uuid": "ad848c17-84c1-4c86-b0c0-07d3c101339a", 00:19:45.845 "assigned_rate_limits": { 00:19:45.845 "rw_ios_per_sec": 0, 00:19:45.845 "rw_mbytes_per_sec": 0, 00:19:45.845 "r_mbytes_per_sec": 0, 00:19:45.845 "w_mbytes_per_sec": 0 00:19:45.845 }, 00:19:45.845 "claimed": true, 00:19:45.845 "claim_type": "exclusive_write", 00:19:45.845 "zoned": false, 00:19:45.845 "supported_io_types": { 00:19:45.845 "read": true, 00:19:45.845 "write": true, 00:19:45.845 "unmap": true, 00:19:45.845 "write_zeroes": true, 00:19:45.845 "flush": true, 00:19:45.845 "reset": true, 00:19:45.845 "compare": false, 00:19:45.845 "compare_and_write": false, 00:19:45.845 "abort": true, 00:19:45.845 "nvme_admin": false, 00:19:45.845 "nvme_io": false 00:19:45.845 }, 00:19:45.845 "memory_domains": [ 00:19:45.845 { 00:19:45.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.845 "dma_device_type": 2 00:19:45.845 } 00:19:45.845 ], 00:19:45.845 "driver_specific": {} 00:19:45.845 } 00:19:45.845 ] 00:19:45.845 12:39:28 -- common/autotest_common.sh@895 -- # return 0 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.845 "name": "Existed_Raid", 00:19:45.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.845 "strip_size_kb": 64, 00:19:45.845 "state": "configuring", 00:19:45.845 "raid_level": "concat", 00:19:45.845 "superblock": false, 00:19:45.845 "num_base_bdevs": 3, 00:19:45.845 "num_base_bdevs_discovered": 2, 00:19:45.845 "num_base_bdevs_operational": 3, 00:19:45.845 "base_bdevs_list": [ 00:19:45.845 { 00:19:45.845 "name": "BaseBdev1", 00:19:45.845 "uuid": "1dc8bbcd-85c5-414c-a22c-1a79812e4c58", 00:19:45.845 "is_configured": true, 00:19:45.845 "data_offset": 0, 00:19:45.845 "data_size": 65536 00:19:45.845 }, 00:19:45.845 { 00:19:45.845 "name": "BaseBdev2", 00:19:45.845 "uuid": "ad848c17-84c1-4c86-b0c0-07d3c101339a", 00:19:45.845 "is_configured": true, 00:19:45.845 "data_offset": 0, 00:19:45.845 "data_size": 65536 00:19:45.845 }, 00:19:45.845 { 00:19:45.845 "name": "BaseBdev3", 00:19:45.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.845 "is_configured": false, 00:19:45.845 "data_offset": 0, 00:19:45.845 "data_size": 0 00:19:45.845 } 00:19:45.845 ] 00:19:45.845 }' 00:19:45.845 12:39:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.845 12:39:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.413 12:39:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:46.673 [2024-10-01 12:39:29.062429] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.673 [2024-10-01 12:39:29.062613] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:46.673 [2024-10-01 12:39:29.062654] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:46.673 [2024-10-01 12:39:29.062833] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:46.673 [2024-10-01 12:39:29.063236] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:46.673 [2024-10-01 12:39:29.063342] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:46.673 [2024-10-01 12:39:29.063647] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.673 BaseBdev3 00:19:46.673 12:39:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:46.673 12:39:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:46.673 12:39:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:46.673 12:39:29 -- common/autotest_common.sh@889 -- # local i 00:19:46.673 12:39:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:46.673 12:39:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:46.673 12:39:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.932 12:39:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:46.932 [ 00:19:46.932 { 00:19:46.932 "name": "BaseBdev3", 00:19:46.932 "aliases": [ 00:19:46.932 "74f30ec8-44be-4037-87e6-d13ab578f1a9" 00:19:46.932 ], 00:19:46.932 "product_name": "Malloc disk", 00:19:46.932 "block_size": 512, 00:19:46.932 "num_blocks": 65536, 00:19:46.932 "uuid": "74f30ec8-44be-4037-87e6-d13ab578f1a9", 00:19:46.932 "assigned_rate_limits": { 00:19:46.932 
"rw_ios_per_sec": 0, 00:19:46.932 "rw_mbytes_per_sec": 0, 00:19:46.932 "r_mbytes_per_sec": 0, 00:19:46.932 "w_mbytes_per_sec": 0 00:19:46.932 }, 00:19:46.932 "claimed": true, 00:19:46.932 "claim_type": "exclusive_write", 00:19:46.932 "zoned": false, 00:19:46.932 "supported_io_types": { 00:19:46.932 "read": true, 00:19:46.932 "write": true, 00:19:46.932 "unmap": true, 00:19:46.932 "write_zeroes": true, 00:19:46.932 "flush": true, 00:19:46.932 "reset": true, 00:19:46.932 "compare": false, 00:19:46.932 "compare_and_write": false, 00:19:46.932 "abort": true, 00:19:46.932 "nvme_admin": false, 00:19:46.932 "nvme_io": false 00:19:46.932 }, 00:19:46.932 "memory_domains": [ 00:19:46.932 { 00:19:46.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.932 "dma_device_type": 2 00:19:46.932 } 00:19:46.932 ], 00:19:46.932 "driver_specific": {} 00:19:46.932 } 00:19:46.932 ] 00:19:47.194 12:39:29 -- common/autotest_common.sh@895 -- # return 0 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.194 "name": "Existed_Raid", 00:19:47.194 "uuid": "c24b2a89-4252-4a28-a4dc-c0d89e5614c8", 00:19:47.194 "strip_size_kb": 64, 00:19:47.194 "state": "online", 00:19:47.194 "raid_level": "concat", 00:19:47.194 "superblock": false, 00:19:47.194 "num_base_bdevs": 3, 00:19:47.194 "num_base_bdevs_discovered": 3, 00:19:47.194 "num_base_bdevs_operational": 3, 00:19:47.194 "base_bdevs_list": [ 00:19:47.194 { 00:19:47.194 "name": "BaseBdev1", 00:19:47.194 "uuid": "1dc8bbcd-85c5-414c-a22c-1a79812e4c58", 00:19:47.194 "is_configured": true, 00:19:47.194 "data_offset": 0, 00:19:47.194 "data_size": 65536 00:19:47.194 }, 00:19:47.194 { 00:19:47.194 "name": "BaseBdev2", 00:19:47.194 "uuid": "ad848c17-84c1-4c86-b0c0-07d3c101339a", 00:19:47.194 "is_configured": true, 00:19:47.194 "data_offset": 0, 00:19:47.194 "data_size": 65536 00:19:47.194 }, 00:19:47.194 { 00:19:47.194 "name": "BaseBdev3", 00:19:47.194 "uuid": "74f30ec8-44be-4037-87e6-d13ab578f1a9", 00:19:47.194 "is_configured": true, 00:19:47.194 "data_offset": 0, 00:19:47.194 "data_size": 65536 00:19:47.194 } 00:19:47.194 ] 00:19:47.194 }' 00:19:47.194 12:39:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.194 12:39:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.792 12:39:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:19:48.050 [2024-10-01 12:39:30.360647] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.050 [2024-10-01 12:39:30.360808] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.050 [2024-10-01 12:39:30.360951] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.050 12:39:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.309 12:39:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.309 "name": "Existed_Raid", 00:19:48.309 "uuid": "c24b2a89-4252-4a28-a4dc-c0d89e5614c8", 00:19:48.309 "strip_size_kb": 64, 00:19:48.309 "state": "offline", 00:19:48.309 "raid_level": "concat", 00:19:48.309 "superblock": false, 00:19:48.309 "num_base_bdevs": 3, 00:19:48.309 "num_base_bdevs_discovered": 2, 00:19:48.309 "num_base_bdevs_operational": 2, 00:19:48.309 "base_bdevs_list": [ 00:19:48.309 { 00:19:48.309 "name": null, 00:19:48.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.309 "is_configured": false, 00:19:48.309 "data_offset": 0, 00:19:48.309 "data_size": 65536 00:19:48.309 }, 00:19:48.309 { 00:19:48.309 "name": "BaseBdev2", 00:19:48.309 "uuid": "ad848c17-84c1-4c86-b0c0-07d3c101339a", 00:19:48.309 "is_configured": true, 00:19:48.309 "data_offset": 0, 00:19:48.309 "data_size": 65536 00:19:48.309 }, 00:19:48.309 { 00:19:48.309 "name": "BaseBdev3", 00:19:48.309 "uuid": "74f30ec8-44be-4037-87e6-d13ab578f1a9", 00:19:48.309 "is_configured": true, 00:19:48.309 "data_offset": 0, 00:19:48.309 "data_size": 65536 00:19:48.309 } 00:19:48.309 ] 00:19:48.309 }' 00:19:48.309 12:39:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.309 12:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:48.876 12:39:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:48.876 12:39:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:48.876 12:39:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.876 12:39:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:48.876 12:39:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:48.876 12:39:31 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:48.876 12:39:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:49.135 [2024-10-01 12:39:31.531049] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.135 12:39:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:49.135 12:39:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:49.135 12:39:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.135 12:39:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:49.394 12:39:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:49.394 12:39:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.394 12:39:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:49.652 [2024-10-01 12:39:31.983613] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:49.652 [2024-10-01 12:39:31.983795] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:49.652 12:39:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:49.652 12:39:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:49.652 12:39:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.652 12:39:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:49.911 12:39:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:49.911 12:39:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:49.911 12:39:32 -- bdev/bdev_raid.sh@287 -- # killprocess 116634 00:19:49.911 12:39:32 -- common/autotest_common.sh@926 -- # '[' -z 116634 ']' 00:19:49.911 12:39:32 -- common/autotest_common.sh@930 -- # kill -0 116634 00:19:49.911 12:39:32 -- common/autotest_common.sh@931 -- # uname 00:19:49.911 12:39:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:49.911 12:39:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116634 00:19:49.911 12:39:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:49.911 12:39:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:49.911 12:39:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116634' 00:19:49.911 killing process with pid 116634 00:19:49.911 12:39:32 -- common/autotest_common.sh@945 -- # kill 116634 00:19:49.911 [2024-10-01 12:39:32.319746] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.911 12:39:32 -- common/autotest_common.sh@950 -- # wait 116634 00:19:49.911 [2024-10-01 12:39:32.319950] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:51.288 00:19:51.288 real 0m10.382s 00:19:51.288 user 0m17.477s 00:19:51.288 sys 0m1.590s 00:19:51.288 12:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.288 12:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:51.288 ************************************ 00:19:51.288 END TEST raid_state_function_test 00:19:51.288 ************************************ 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:19:51.288 12:39:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
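Note on the offline transition exercised above: concat, like raid0, carries no redundancy (has_redundancy returns 1 for both), so deleting any single base bdev from an online array drops it straight to "offline" rather than to a degraded state, and the removed base's slot is kept in base_bdevs_list with a null name. A minimal sketch of that check, assuming the same socket and an online 3-way concat array named Existed_Raid built from malloc bdevs:

  # sketch: removing one base from a non-redundant (concat/raid0) array takes it offline
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
  # expected: "offline 2" — two bases remain discovered, the deleted slot reports name null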
00:19:51.288 12:39:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:51.288 12:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:51.288 ************************************ 00:19:51.288 START TEST raid_state_function_test_sb 00:19:51.288 ************************************ 00:19:51.288 12:39:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:51.288 12:39:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=116990 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116990' 00:19:51.289 Process raid pid: 116990 00:19:51.289 12:39:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116990 /var/tmp/spdk-raid.sock 00:19:51.289 12:39:33 -- common/autotest_common.sh@819 -- # '[' -z 116990 ']' 00:19:51.289 12:39:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:51.289 12:39:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:51.289 12:39:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:51.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:51.289 12:39:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:51.289 12:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:51.289 [2024-10-01 12:39:33.714994] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:51.289 [2024-10-01 12:39:33.715302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.547 [2024-10-01 12:39:33.883180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.547 [2024-10-01 12:39:34.030600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.806 [2024-10-01 12:39:34.180299] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.064 12:39:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:52.064 12:39:34 -- common/autotest_common.sh@852 -- # return 0 00:19:52.064 12:39:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:52.323 [2024-10-01 12:39:34.675811] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:52.323 [2024-10-01 12:39:34.676048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:52.323 [2024-10-01 12:39:34.676140] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.323 [2024-10-01 12:39:34.676191] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.323 [2024-10-01 12:39:34.676216] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:52.323 [2024-10-01 12:39:34.676272] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.323 12:39:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.581 12:39:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.581 "name": "Existed_Raid", 00:19:52.581 "uuid": "77598298-840c-4e92-8537-6ee91804a872", 00:19:52.581 "strip_size_kb": 64, 00:19:52.581 "state": "configuring", 00:19:52.581 "raid_level": "concat", 00:19:52.581 "superblock": true, 00:19:52.581 "num_base_bdevs": 3, 00:19:52.581 "num_base_bdevs_discovered": 0, 00:19:52.581 "num_base_bdevs_operational": 3, 00:19:52.581 "base_bdevs_list": [ 00:19:52.581 { 00:19:52.581 "name": "BaseBdev1", 00:19:52.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.581 "is_configured": false, 00:19:52.581 "data_offset": 0, 00:19:52.581 "data_size": 0 00:19:52.581 }, 00:19:52.581 { 00:19:52.581 "name": "BaseBdev2", 00:19:52.581 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:52.581 "is_configured": false, 00:19:52.581 "data_offset": 0, 00:19:52.581 "data_size": 0 00:19:52.581 }, 00:19:52.581 { 00:19:52.581 "name": "BaseBdev3", 00:19:52.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.581 "is_configured": false, 00:19:52.581 "data_offset": 0, 00:19:52.581 "data_size": 0 00:19:52.581 } 00:19:52.581 ] 00:19:52.581 }' 00:19:52.581 12:39:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.581 12:39:34 -- common/autotest_common.sh@10 -- # set +x 00:19:53.149 12:39:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:53.149 [2024-10-01 12:39:35.554622] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:53.149 [2024-10-01 12:39:35.554782] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:53.149 12:39:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:53.408 [2024-10-01 12:39:35.738413] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:53.408 [2024-10-01 12:39:35.738594] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:53.408 [2024-10-01 12:39:35.738703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.408 [2024-10-01 12:39:35.738760] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.408 [2024-10-01 12:39:35.738786] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.408 [2024-10-01 12:39:35.738830] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.408 12:39:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:53.408 [2024-10-01 12:39:35.916561] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.408 BaseBdev1 00:19:53.408 12:39:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:53.408 12:39:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:53.408 12:39:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:53.408 12:39:35 -- common/autotest_common.sh@889 -- # local i 00:19:53.408 12:39:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:53.408 12:39:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:53.408 12:39:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:53.667 12:39:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:53.926 [ 00:19:53.926 { 00:19:53.926 "name": "BaseBdev1", 00:19:53.926 "aliases": [ 00:19:53.926 "0c080d2b-1696-4d97-927e-f04668044c5d" 00:19:53.926 ], 00:19:53.926 "product_name": "Malloc disk", 00:19:53.926 "block_size": 512, 00:19:53.926 "num_blocks": 65536, 00:19:53.926 "uuid": "0c080d2b-1696-4d97-927e-f04668044c5d", 00:19:53.926 "assigned_rate_limits": { 00:19:53.926 "rw_ios_per_sec": 0, 00:19:53.926 "rw_mbytes_per_sec": 0, 00:19:53.926 "r_mbytes_per_sec": 0, 00:19:53.926 
"w_mbytes_per_sec": 0 00:19:53.926 }, 00:19:53.926 "claimed": true, 00:19:53.926 "claim_type": "exclusive_write", 00:19:53.926 "zoned": false, 00:19:53.926 "supported_io_types": { 00:19:53.926 "read": true, 00:19:53.926 "write": true, 00:19:53.926 "unmap": true, 00:19:53.926 "write_zeroes": true, 00:19:53.926 "flush": true, 00:19:53.926 "reset": true, 00:19:53.926 "compare": false, 00:19:53.926 "compare_and_write": false, 00:19:53.926 "abort": true, 00:19:53.926 "nvme_admin": false, 00:19:53.926 "nvme_io": false 00:19:53.926 }, 00:19:53.926 "memory_domains": [ 00:19:53.926 { 00:19:53.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.926 "dma_device_type": 2 00:19:53.926 } 00:19:53.926 ], 00:19:53.926 "driver_specific": {} 00:19:53.926 } 00:19:53.926 ] 00:19:53.926 12:39:36 -- common/autotest_common.sh@895 -- # return 0 00:19:53.926 12:39:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.927 12:39:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.186 12:39:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.186 "name": "Existed_Raid", 00:19:54.186 "uuid": "843b05ac-62fe-4fc4-9b27-463fffdf3cec", 00:19:54.186 "strip_size_kb": 64, 00:19:54.186 "state": "configuring", 00:19:54.186 "raid_level": "concat", 00:19:54.186 "superblock": true, 00:19:54.186 "num_base_bdevs": 3, 00:19:54.186 "num_base_bdevs_discovered": 1, 00:19:54.186 "num_base_bdevs_operational": 3, 00:19:54.186 "base_bdevs_list": [ 00:19:54.186 { 00:19:54.186 "name": "BaseBdev1", 00:19:54.186 "uuid": "0c080d2b-1696-4d97-927e-f04668044c5d", 00:19:54.186 "is_configured": true, 00:19:54.186 "data_offset": 2048, 00:19:54.186 "data_size": 63488 00:19:54.186 }, 00:19:54.186 { 00:19:54.186 "name": "BaseBdev2", 00:19:54.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.186 "is_configured": false, 00:19:54.186 "data_offset": 0, 00:19:54.186 "data_size": 0 00:19:54.186 }, 00:19:54.186 { 00:19:54.186 "name": "BaseBdev3", 00:19:54.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.186 "is_configured": false, 00:19:54.186 "data_offset": 0, 00:19:54.186 "data_size": 0 00:19:54.186 } 00:19:54.186 ] 00:19:54.186 }' 00:19:54.186 12:39:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.186 12:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:54.753 12:39:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:54.753 [2024-10-01 12:39:37.166780] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:54.753 [2024-10-01 12:39:37.166974] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:54.754 12:39:37 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:54.754 12:39:37 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:55.013 12:39:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:55.272 BaseBdev1 00:19:55.272 12:39:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:55.272 12:39:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:55.272 12:39:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:55.272 12:39:37 -- common/autotest_common.sh@889 -- # local i 00:19:55.272 12:39:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:55.272 12:39:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:55.272 12:39:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:55.530 12:39:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:55.530 [ 00:19:55.531 { 00:19:55.531 "name": "BaseBdev1", 00:19:55.531 "aliases": [ 00:19:55.531 "6f6cd0e1-ee58-4bb1-8074-c76ce68a76c4" 00:19:55.531 ], 00:19:55.531 "product_name": "Malloc disk", 00:19:55.531 "block_size": 512, 00:19:55.531 "num_blocks": 65536, 00:19:55.531 "uuid": "6f6cd0e1-ee58-4bb1-8074-c76ce68a76c4", 00:19:55.531 "assigned_rate_limits": { 00:19:55.531 "rw_ios_per_sec": 0, 00:19:55.531 "rw_mbytes_per_sec": 0, 00:19:55.531 "r_mbytes_per_sec": 0, 00:19:55.531 "w_mbytes_per_sec": 0 00:19:55.531 }, 00:19:55.531 "claimed": false, 00:19:55.531 "zoned": false, 00:19:55.531 "supported_io_types": { 00:19:55.531 "read": true, 00:19:55.531 "write": true, 00:19:55.531 "unmap": true, 00:19:55.531 "write_zeroes": true, 00:19:55.531 "flush": true, 00:19:55.531 "reset": true, 00:19:55.531 "compare": false, 00:19:55.531 "compare_and_write": false, 00:19:55.531 "abort": true, 00:19:55.531 "nvme_admin": false, 00:19:55.531 "nvme_io": false 00:19:55.531 }, 00:19:55.531 "memory_domains": [ 00:19:55.531 { 00:19:55.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.531 "dma_device_type": 2 00:19:55.531 } 00:19:55.531 ], 00:19:55.531 "driver_specific": {} 00:19:55.531 } 00:19:55.531 ] 00:19:55.531 12:39:37 -- common/autotest_common.sh@895 -- # return 0 00:19:55.531 12:39:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:55.790 [2024-10-01 12:39:38.157373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.790 [2024-10-01 12:39:38.159278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.790 [2024-10-01 12:39:38.159440] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.790 [2024-10-01 12:39:38.159562] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:55.790 [2024-10-01 12:39:38.159620] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:55.790 
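This stretch of raid_state_function_test exercises incremental assembly: bdev_raid_create registers the concat array while most of its base bdevs are still missing, so the raid bdev is held in the "configuring" state, and each subsequent bdev_malloc_create lets the raid module claim one more member. A minimal standalone sketch of the same flow against a running SPDK app (socket path, bdev names, and sizes taken from the log above; the trailing ".state" projection is an addition for brevity):

    # Create the array first; the base bdevs do not exist yet, so state stays "configuring".
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Register one member; the raid module claims it as soon as it appears.
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    # Inspect assembly progress.
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'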
12:39:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.790 12:39:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.049 12:39:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.049 "name": "Existed_Raid", 00:19:56.049 "uuid": "60f9967d-07d9-4f72-a00d-a3af21c7a6d6", 00:19:56.049 "strip_size_kb": 64, 00:19:56.049 "state": "configuring", 00:19:56.049 "raid_level": "concat", 00:19:56.049 "superblock": true, 00:19:56.049 "num_base_bdevs": 3, 00:19:56.049 "num_base_bdevs_discovered": 1, 00:19:56.049 "num_base_bdevs_operational": 3, 00:19:56.049 "base_bdevs_list": [ 00:19:56.049 { 00:19:56.049 "name": "BaseBdev1", 00:19:56.049 "uuid": "6f6cd0e1-ee58-4bb1-8074-c76ce68a76c4", 00:19:56.049 "is_configured": true, 00:19:56.049 "data_offset": 2048, 00:19:56.049 "data_size": 63488 00:19:56.049 }, 00:19:56.049 { 00:19:56.049 "name": "BaseBdev2", 00:19:56.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.049 "is_configured": false, 00:19:56.049 "data_offset": 0, 00:19:56.049 "data_size": 0 00:19:56.049 }, 00:19:56.049 { 00:19:56.049 "name": "BaseBdev3", 00:19:56.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.049 "is_configured": false, 00:19:56.049 "data_offset": 0, 00:19:56.049 "data_size": 0 00:19:56.049 } 00:19:56.049 ] 00:19:56.049 }' 00:19:56.049 12:39:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.049 12:39:38 -- common/autotest_common.sh@10 -- # set +x 00:19:56.617 12:39:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:56.617 [2024-10-01 12:39:39.069623] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.617 BaseBdev2 00:19:56.617 12:39:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:56.617 12:39:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:56.617 12:39:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:56.617 12:39:39 -- common/autotest_common.sh@889 -- # local i 00:19:56.617 12:39:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:56.617 12:39:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:56.617 12:39:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:56.876 12:39:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:57.134 [ 00:19:57.134 { 00:19:57.134 "name": "BaseBdev2", 00:19:57.134 "aliases": [ 00:19:57.134 
"a464eb6f-905e-4784-8c9e-b9e1dba80d33" 00:19:57.134 ], 00:19:57.134 "product_name": "Malloc disk", 00:19:57.134 "block_size": 512, 00:19:57.134 "num_blocks": 65536, 00:19:57.134 "uuid": "a464eb6f-905e-4784-8c9e-b9e1dba80d33", 00:19:57.134 "assigned_rate_limits": { 00:19:57.134 "rw_ios_per_sec": 0, 00:19:57.134 "rw_mbytes_per_sec": 0, 00:19:57.134 "r_mbytes_per_sec": 0, 00:19:57.134 "w_mbytes_per_sec": 0 00:19:57.134 }, 00:19:57.134 "claimed": true, 00:19:57.134 "claim_type": "exclusive_write", 00:19:57.134 "zoned": false, 00:19:57.134 "supported_io_types": { 00:19:57.134 "read": true, 00:19:57.134 "write": true, 00:19:57.134 "unmap": true, 00:19:57.134 "write_zeroes": true, 00:19:57.134 "flush": true, 00:19:57.134 "reset": true, 00:19:57.134 "compare": false, 00:19:57.134 "compare_and_write": false, 00:19:57.135 "abort": true, 00:19:57.135 "nvme_admin": false, 00:19:57.135 "nvme_io": false 00:19:57.135 }, 00:19:57.135 "memory_domains": [ 00:19:57.135 { 00:19:57.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.135 "dma_device_type": 2 00:19:57.135 } 00:19:57.135 ], 00:19:57.135 "driver_specific": {} 00:19:57.135 } 00:19:57.135 ] 00:19:57.135 12:39:39 -- common/autotest_common.sh@895 -- # return 0 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.135 "name": "Existed_Raid", 00:19:57.135 "uuid": "60f9967d-07d9-4f72-a00d-a3af21c7a6d6", 00:19:57.135 "strip_size_kb": 64, 00:19:57.135 "state": "configuring", 00:19:57.135 "raid_level": "concat", 00:19:57.135 "superblock": true, 00:19:57.135 "num_base_bdevs": 3, 00:19:57.135 "num_base_bdevs_discovered": 2, 00:19:57.135 "num_base_bdevs_operational": 3, 00:19:57.135 "base_bdevs_list": [ 00:19:57.135 { 00:19:57.135 "name": "BaseBdev1", 00:19:57.135 "uuid": "6f6cd0e1-ee58-4bb1-8074-c76ce68a76c4", 00:19:57.135 "is_configured": true, 00:19:57.135 "data_offset": 2048, 00:19:57.135 "data_size": 63488 00:19:57.135 }, 00:19:57.135 { 00:19:57.135 "name": "BaseBdev2", 00:19:57.135 "uuid": "a464eb6f-905e-4784-8c9e-b9e1dba80d33", 00:19:57.135 "is_configured": true, 00:19:57.135 "data_offset": 2048, 00:19:57.135 "data_size": 63488 00:19:57.135 }, 00:19:57.135 { 00:19:57.135 "name": "BaseBdev3", 00:19:57.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.135 "is_configured": false, 00:19:57.135 "data_offset": 0, 00:19:57.135 "data_size": 0 
00:19:57.135 } 00:19:57.135 ] 00:19:57.135 }' 00:19:57.135 12:39:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.135 12:39:39 -- common/autotest_common.sh@10 -- # set +x 00:19:57.703 12:39:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:57.962 [2024-10-01 12:39:40.320162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.962 [2024-10-01 12:39:40.320549] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:57.962 [2024-10-01 12:39:40.320598] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:57.962 [2024-10-01 12:39:40.320827] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:57.962 [2024-10-01 12:39:40.321131] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:57.962 [2024-10-01 12:39:40.321176] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:57.962 [2024-10-01 12:39:40.321406] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.962 BaseBdev3 00:19:57.962 12:39:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:57.962 12:39:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:57.962 12:39:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:57.962 12:39:40 -- common/autotest_common.sh@889 -- # local i 00:19:57.962 12:39:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:57.962 12:39:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:57.962 12:39:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.223 12:39:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:58.223 [ 00:19:58.223 { 00:19:58.223 "name": "BaseBdev3", 00:19:58.223 "aliases": [ 00:19:58.223 "aad50da3-e3f6-4567-8ad7-efc45d0730ff" 00:19:58.223 ], 00:19:58.223 "product_name": "Malloc disk", 00:19:58.223 "block_size": 512, 00:19:58.223 "num_blocks": 65536, 00:19:58.223 "uuid": "aad50da3-e3f6-4567-8ad7-efc45d0730ff", 00:19:58.223 "assigned_rate_limits": { 00:19:58.223 "rw_ios_per_sec": 0, 00:19:58.223 "rw_mbytes_per_sec": 0, 00:19:58.223 "r_mbytes_per_sec": 0, 00:19:58.223 "w_mbytes_per_sec": 0 00:19:58.223 }, 00:19:58.223 "claimed": true, 00:19:58.223 "claim_type": "exclusive_write", 00:19:58.223 "zoned": false, 00:19:58.223 "supported_io_types": { 00:19:58.223 "read": true, 00:19:58.223 "write": true, 00:19:58.223 "unmap": true, 00:19:58.223 "write_zeroes": true, 00:19:58.223 "flush": true, 00:19:58.223 "reset": true, 00:19:58.223 "compare": false, 00:19:58.223 "compare_and_write": false, 00:19:58.223 "abort": true, 00:19:58.223 "nvme_admin": false, 00:19:58.223 "nvme_io": false 00:19:58.223 }, 00:19:58.223 "memory_domains": [ 00:19:58.223 { 00:19:58.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.223 "dma_device_type": 2 00:19:58.223 } 00:19:58.223 ], 00:19:58.223 "driver_specific": {} 00:19:58.223 } 00:19:58.223 ] 00:19:58.223 12:39:40 -- common/autotest_common.sh@895 -- # return 0 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:58.223 12:39:40 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:58.223 12:39:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.224 12:39:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.224 12:39:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.224 12:39:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.224 12:39:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.224 12:39:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.484 12:39:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.484 "name": "Existed_Raid", 00:19:58.484 "uuid": "60f9967d-07d9-4f72-a00d-a3af21c7a6d6", 00:19:58.484 "strip_size_kb": 64, 00:19:58.484 "state": "online", 00:19:58.484 "raid_level": "concat", 00:19:58.484 "superblock": true, 00:19:58.484 "num_base_bdevs": 3, 00:19:58.484 "num_base_bdevs_discovered": 3, 00:19:58.484 "num_base_bdevs_operational": 3, 00:19:58.484 "base_bdevs_list": [ 00:19:58.484 { 00:19:58.484 "name": "BaseBdev1", 00:19:58.484 "uuid": "6f6cd0e1-ee58-4bb1-8074-c76ce68a76c4", 00:19:58.484 "is_configured": true, 00:19:58.484 "data_offset": 2048, 00:19:58.484 "data_size": 63488 00:19:58.484 }, 00:19:58.484 { 00:19:58.484 "name": "BaseBdev2", 00:19:58.484 "uuid": "a464eb6f-905e-4784-8c9e-b9e1dba80d33", 00:19:58.484 "is_configured": true, 00:19:58.484 "data_offset": 2048, 00:19:58.484 "data_size": 63488 00:19:58.484 }, 00:19:58.484 { 00:19:58.484 "name": "BaseBdev3", 00:19:58.484 "uuid": "aad50da3-e3f6-4567-8ad7-efc45d0730ff", 00:19:58.484 "is_configured": true, 00:19:58.484 "data_offset": 2048, 00:19:58.484 "data_size": 63488 00:19:58.484 } 00:19:58.484 ] 00:19:58.484 }' 00:19:58.484 12:39:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.484 12:39:40 -- common/autotest_common.sh@10 -- # set +x 00:19:59.052 12:39:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:59.311 [2024-10-01 12:39:41.602504] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:59.311 [2024-10-01 12:39:41.602661] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.312 [2024-10-01 12:39:41.602795] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:59.312 12:39:41 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.312 12:39:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.571 12:39:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.571 "name": "Existed_Raid", 00:19:59.571 "uuid": "60f9967d-07d9-4f72-a00d-a3af21c7a6d6", 00:19:59.571 "strip_size_kb": 64, 00:19:59.571 "state": "offline", 00:19:59.571 "raid_level": "concat", 00:19:59.571 "superblock": true, 00:19:59.571 "num_base_bdevs": 3, 00:19:59.571 "num_base_bdevs_discovered": 2, 00:19:59.571 "num_base_bdevs_operational": 2, 00:19:59.571 "base_bdevs_list": [ 00:19:59.571 { 00:19:59.571 "name": null, 00:19:59.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.571 "is_configured": false, 00:19:59.571 "data_offset": 2048, 00:19:59.571 "data_size": 63488 00:19:59.571 }, 00:19:59.571 { 00:19:59.571 "name": "BaseBdev2", 00:19:59.571 "uuid": "a464eb6f-905e-4784-8c9e-b9e1dba80d33", 00:19:59.571 "is_configured": true, 00:19:59.571 "data_offset": 2048, 00:19:59.571 "data_size": 63488 00:19:59.571 }, 00:19:59.571 { 00:19:59.571 "name": "BaseBdev3", 00:19:59.571 "uuid": "aad50da3-e3f6-4567-8ad7-efc45d0730ff", 00:19:59.571 "is_configured": true, 00:19:59.571 "data_offset": 2048, 00:19:59.571 "data_size": 63488 00:19:59.571 } 00:19:59.571 ] 00:19:59.571 }' 00:19:59.571 12:39:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.571 12:39:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.139 12:39:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:00.139 12:39:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:00.139 12:39:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.139 12:39:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:00.139 12:39:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:00.139 12:39:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:00.139 12:39:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:00.399 [2024-10-01 12:39:42.775330] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:00.399 12:39:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:00.399 12:39:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:00.399 12:39:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.399 12:39:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:00.658 12:39:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:00.658 12:39:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:00.658 12:39:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:00.917 [2024-10-01 12:39:43.228403] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
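The removal checks above turn on has_redundancy: for concat it returns 1 (no redundancy), so deleting any base bdev is expected to move Existed_Raid from "online" straight to "offline" rather than to a degraded state, with num_base_bdevs_discovered and num_base_bdevs_operational dropping to 2 and the removed slot's name reported as null. A short sketch of that check, using the same RPCs the test drives (names and socket as in the log; the ".state" filter is added for brevity):

    # concat carries no redundancy: losing one member takes the whole array offline
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expected: "offline"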
00:20:00.917 [2024-10-01 12:39:43.228619] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:00.917 12:39:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:00.917 12:39:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:00.917 12:39:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.917 12:39:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:01.176 12:39:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:01.176 12:39:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:01.176 12:39:43 -- bdev/bdev_raid.sh@287 -- # killprocess 116990 00:20:01.176 12:39:43 -- common/autotest_common.sh@926 -- # '[' -z 116990 ']' 00:20:01.176 12:39:43 -- common/autotest_common.sh@930 -- # kill -0 116990 00:20:01.176 12:39:43 -- common/autotest_common.sh@931 -- # uname 00:20:01.176 12:39:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:01.176 12:39:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116990 00:20:01.176 12:39:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:01.176 12:39:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:01.176 12:39:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116990' 00:20:01.176 killing process with pid 116990 00:20:01.176 12:39:43 -- common/autotest_common.sh@945 -- # kill 116990 00:20:01.176 12:39:43 -- common/autotest_common.sh@950 -- # wait 116990 00:20:01.176 [2024-10-01 12:39:43.543485] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.176 [2024-10-01 12:39:43.543827] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:02.555 00:20:02.555 real 0m11.103s 00:20:02.555 user 0m18.596s 00:20:02.555 sys 0m1.765s 00:20:02.555 12:39:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.555 12:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.555 ************************************ 00:20:02.555 END TEST raid_state_function_test_sb 00:20:02.555 ************************************ 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:20:02.555 12:39:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:02.555 12:39:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.555 12:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.555 ************************************ 00:20:02.555 START TEST raid_superblock_test 00:20:02.555 ************************************ 00:20:02.555 12:39:44 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=117363 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:02.555 12:39:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117363 /var/tmp/spdk-raid.sock 00:20:02.555 12:39:44 -- common/autotest_common.sh@819 -- # '[' -z 117363 ']' 00:20:02.555 12:39:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:02.555 12:39:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.555 12:39:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:02.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:02.555 12:39:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.555 12:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.555 [2024-10-01 12:39:44.900680] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:02.555 [2024-10-01 12:39:44.900992] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117363 ] 00:20:02.555 [2024-10-01 12:39:45.067530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.815 [2024-10-01 12:39:45.217068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.073 [2024-10-01 12:39:45.361142] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.332 12:39:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:03.333 12:39:45 -- common/autotest_common.sh@852 -- # return 0 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:03.333 12:39:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:03.591 malloc1 00:20:03.591 12:39:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:03.591 [2024-10-01 12:39:46.065979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:03.591 [2024-10-01 12:39:46.066598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:03.591 [2024-10-01 12:39:46.066990] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:03.591 [2024-10-01 12:39:46.067342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.591 [2024-10-01 12:39:46.074229] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.591 [2024-10-01 12:39:46.074619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:03.591 pt1 00:20:03.591 12:39:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:03.592 12:39:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:03.850 malloc2 00:20:03.850 12:39:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:04.109 [2024-10-01 12:39:46.515279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:04.109 [2024-10-01 12:39:46.515476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.109 [2024-10-01 12:39:46.515544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:04.109 [2024-10-01 12:39:46.515672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.109 [2024-10-01 12:39:46.517839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.109 [2024-10-01 12:39:46.518002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:04.109 pt2 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:04.109 12:39:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:04.368 malloc3 00:20:04.368 12:39:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:04.626 [2024-10-01 12:39:46.946325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:04.626 [2024-10-01 12:39:46.946574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:04.626 [2024-10-01 12:39:46.946653] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:04.626 [2024-10-01 12:39:46.946779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.626 [2024-10-01 12:39:46.949203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.626 [2024-10-01 12:39:46.949384] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:04.626 pt3 00:20:04.626 12:39:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:04.626 12:39:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:04.626 12:39:46 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:04.626 [2024-10-01 12:39:47.122091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:04.626 [2024-10-01 12:39:47.124243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:04.626 [2024-10-01 12:39:47.124427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:04.626 [2024-10-01 12:39:47.124612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:20:04.626 [2024-10-01 12:39:47.124790] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:04.626 [2024-10-01 12:39:47.124933] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:04.626 [2024-10-01 12:39:47.125378] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:20:04.626 [2024-10-01 12:39:47.125489] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:20:04.626 [2024-10-01 12:39:47.125713] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.626 12:39:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.884 12:39:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:04.884 "name": "raid_bdev1", 00:20:04.884 "uuid": "c422139a-9dbb-4a24-ab7f-c5ae70953371", 00:20:04.884 "strip_size_kb": 64, 00:20:04.884 "state": "online", 00:20:04.884 "raid_level": "concat", 00:20:04.884 "superblock": true, 00:20:04.884 "num_base_bdevs": 3, 00:20:04.884 "num_base_bdevs_discovered": 3, 00:20:04.884 "num_base_bdevs_operational": 3, 00:20:04.884 "base_bdevs_list": [ 00:20:04.884 { 00:20:04.884 "name": "pt1", 00:20:04.884 "uuid": 
"13569828-7458-5594-a07a-195193e5d394", 00:20:04.884 "is_configured": true, 00:20:04.884 "data_offset": 2048, 00:20:04.884 "data_size": 63488 00:20:04.884 }, 00:20:04.884 { 00:20:04.884 "name": "pt2", 00:20:04.884 "uuid": "63cca392-f1fc-59d4-9c75-902e279137d6", 00:20:04.884 "is_configured": true, 00:20:04.884 "data_offset": 2048, 00:20:04.884 "data_size": 63488 00:20:04.884 }, 00:20:04.884 { 00:20:04.884 "name": "pt3", 00:20:04.884 "uuid": "0b84de62-2eee-584a-be1c-e908cf328d91", 00:20:04.884 "is_configured": true, 00:20:04.884 "data_offset": 2048, 00:20:04.884 "data_size": 63488 00:20:04.884 } 00:20:04.884 ] 00:20:04.884 }' 00:20:04.884 12:39:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:04.884 12:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:05.450 12:39:47 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:05.450 12:39:47 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:05.708 [2024-10-01 12:39:48.024867] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.708 12:39:48 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c422139a-9dbb-4a24-ab7f-c5ae70953371 00:20:05.708 12:39:48 -- bdev/bdev_raid.sh@380 -- # '[' -z c422139a-9dbb-4a24-ab7f-c5ae70953371 ']' 00:20:05.708 12:39:48 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:05.708 [2024-10-01 12:39:48.208433] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.708 [2024-10-01 12:39:48.208571] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.708 [2024-10-01 12:39:48.208746] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.708 [2024-10-01 12:39:48.208826] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.708 [2024-10-01 12:39:48.208996] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:20:05.708 12:39:48 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.708 12:39:48 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:05.966 12:39:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:05.966 12:39:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:05.966 12:39:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:05.966 12:39:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:06.225 12:39:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.225 12:39:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:06.225 12:39:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.225 12:39:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:06.484 12:39:48 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:06.484 12:39:48 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:06.742 12:39:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:06.742 12:39:49 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:06.742 12:39:49 -- common/autotest_common.sh@640 -- # local es=0 00:20:06.742 12:39:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:06.742 12:39:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.742 12:39:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.742 12:39:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.742 12:39:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.742 12:39:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.742 12:39:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.742 12:39:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.742 12:39:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:06.742 12:39:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:06.742 [2024-10-01 12:39:49.258958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:06.742 [2024-10-01 12:39:49.261064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:06.742 [2024-10-01 12:39:49.261202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:06.742 [2024-10-01 12:39:49.261265] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:06.742 [2024-10-01 12:39:49.261404] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:06.742 [2024-10-01 12:39:49.261565] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:06.742 [2024-10-01 12:39:49.261642] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.742 [2024-10-01 12:39:49.261741] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:20:06.742 request: 00:20:06.742 { 00:20:06.742 "name": "raid_bdev1", 00:20:06.742 "raid_level": "concat", 00:20:06.742 "base_bdevs": [ 00:20:06.742 "malloc1", 00:20:06.742 "malloc2", 00:20:06.742 "malloc3" 00:20:06.742 ], 00:20:06.742 "superblock": false, 00:20:06.742 "strip_size_kb": 64, 00:20:06.742 "method": "bdev_raid_create", 00:20:06.742 "req_id": 1 00:20:06.742 } 00:20:06.742 Got JSON-RPC error response 00:20:06.742 response: 00:20:06.742 { 00:20:06.742 "code": -17, 00:20:06.742 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:06.742 } 00:20:07.001 12:39:49 -- common/autotest_common.sh@643 -- # es=1 00:20:07.001 12:39:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.001 12:39:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.001 12:39:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.001 12:39:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.001 12:39:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:07.001 12:39:49 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:07.001 12:39:49 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:07.001 12:39:49 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:07.261 [2024-10-01 12:39:49.626382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.261 [2024-10-01 12:39:49.626553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.261 [2024-10-01 12:39:49.626614] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:07.261 [2024-10-01 12:39:49.626691] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.261 [2024-10-01 12:39:49.629052] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.261 [2024-10-01 12:39:49.629204] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.261 [2024-10-01 12:39:49.629370] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:07.261 [2024-10-01 12:39:49.629503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.261 pt1 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.261 12:39:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.520 12:39:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.520 "name": "raid_bdev1", 00:20:07.520 "uuid": "c422139a-9dbb-4a24-ab7f-c5ae70953371", 00:20:07.520 "strip_size_kb": 64, 00:20:07.520 "state": "configuring", 00:20:07.520 "raid_level": "concat", 00:20:07.520 "superblock": true, 00:20:07.520 "num_base_bdevs": 3, 00:20:07.520 "num_base_bdevs_discovered": 1, 00:20:07.520 "num_base_bdevs_operational": 3, 00:20:07.520 "base_bdevs_list": [ 00:20:07.520 { 00:20:07.520 "name": "pt1", 00:20:07.520 "uuid": "13569828-7458-5594-a07a-195193e5d394", 00:20:07.520 "is_configured": true, 00:20:07.520 "data_offset": 2048, 00:20:07.520 "data_size": 63488 00:20:07.520 }, 00:20:07.520 { 00:20:07.520 "name": null, 00:20:07.520 "uuid": "63cca392-f1fc-59d4-9c75-902e279137d6", 00:20:07.520 "is_configured": false, 00:20:07.520 "data_offset": 2048, 00:20:07.520 "data_size": 63488 00:20:07.520 }, 00:20:07.520 { 00:20:07.520 "name": null, 00:20:07.520 "uuid": "0b84de62-2eee-584a-be1c-e908cf328d91", 00:20:07.520 "is_configured": false, 00:20:07.520 
"data_offset": 2048, 00:20:07.520 "data_size": 63488 00:20:07.520 } 00:20:07.520 ] 00:20:07.520 }' 00:20:07.520 12:39:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.520 12:39:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.088 12:39:50 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:20:08.088 12:39:50 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.088 [2024-10-01 12:39:50.489135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.088 [2024-10-01 12:39:50.489327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.088 [2024-10-01 12:39:50.489447] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:08.088 [2024-10-01 12:39:50.489535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.088 [2024-10-01 12:39:50.489982] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.088 [2024-10-01 12:39:50.490116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.088 [2024-10-01 12:39:50.490311] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:08.088 [2024-10-01 12:39:50.490430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.088 pt2 00:20:08.088 12:39:50 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:08.347 [2024-10-01 12:39:50.660902] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.347 12:39:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.347 "name": "raid_bdev1", 00:20:08.347 "uuid": "c422139a-9dbb-4a24-ab7f-c5ae70953371", 00:20:08.347 "strip_size_kb": 64, 00:20:08.347 "state": "configuring", 00:20:08.347 "raid_level": "concat", 00:20:08.347 "superblock": true, 00:20:08.347 "num_base_bdevs": 3, 00:20:08.347 "num_base_bdevs_discovered": 1, 00:20:08.347 "num_base_bdevs_operational": 3, 00:20:08.347 "base_bdevs_list": [ 00:20:08.347 { 00:20:08.347 "name": "pt1", 00:20:08.348 "uuid": "13569828-7458-5594-a07a-195193e5d394", 00:20:08.348 "is_configured": true, 00:20:08.348 "data_offset": 2048, 00:20:08.348 "data_size": 63488 00:20:08.348 }, 00:20:08.348 { 00:20:08.348 "name": null, 00:20:08.348 "uuid": 
"63cca392-f1fc-59d4-9c75-902e279137d6", 00:20:08.348 "is_configured": false, 00:20:08.348 "data_offset": 2048, 00:20:08.348 "data_size": 63488 00:20:08.348 }, 00:20:08.348 { 00:20:08.348 "name": null, 00:20:08.348 "uuid": "0b84de62-2eee-584a-be1c-e908cf328d91", 00:20:08.348 "is_configured": false, 00:20:08.348 "data_offset": 2048, 00:20:08.348 "data_size": 63488 00:20:08.348 } 00:20:08.348 ] 00:20:08.348 }' 00:20:08.348 12:39:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.348 12:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.915 12:39:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:08.916 12:39:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:08.916 12:39:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.175 [2024-10-01 12:39:51.571843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.175 [2024-10-01 12:39:51.572122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.175 [2024-10-01 12:39:51.572400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:09.175 [2024-10-01 12:39:51.572590] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.175 [2024-10-01 12:39:51.573236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.175 [2024-10-01 12:39:51.573476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.175 [2024-10-01 12:39:51.573775] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:09.175 [2024-10-01 12:39:51.573969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.175 pt2 00:20:09.175 12:39:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:09.175 12:39:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.175 12:39:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:09.434 [2024-10-01 12:39:51.763539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:09.434 [2024-10-01 12:39:51.763792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.434 [2024-10-01 12:39:51.764098] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:09.434 [2024-10-01 12:39:51.764296] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.434 [2024-10-01 12:39:51.764847] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.434 [2024-10-01 12:39:51.765097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:09.435 [2024-10-01 12:39:51.765456] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:09.435 [2024-10-01 12:39:51.765645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:09.435 [2024-10-01 12:39:51.765921] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:20:09.435 [2024-10-01 12:39:51.766094] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:09.435 [2024-10-01 12:39:51.766355] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:20:09.435 [2024-10-01 12:39:51.766804] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:20:09.435 [2024-10-01 12:39:51.766988] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:20:09.435 [2024-10-01 12:39:51.767336] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.435 pt3 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:09.435 "name": "raid_bdev1", 00:20:09.435 "uuid": "c422139a-9dbb-4a24-ab7f-c5ae70953371", 00:20:09.435 "strip_size_kb": 64, 00:20:09.435 "state": "online", 00:20:09.435 "raid_level": "concat", 00:20:09.435 "superblock": true, 00:20:09.435 "num_base_bdevs": 3, 00:20:09.435 "num_base_bdevs_discovered": 3, 00:20:09.435 "num_base_bdevs_operational": 3, 00:20:09.435 "base_bdevs_list": [ 00:20:09.435 { 00:20:09.435 "name": "pt1", 00:20:09.435 "uuid": "13569828-7458-5594-a07a-195193e5d394", 00:20:09.435 "is_configured": true, 00:20:09.435 "data_offset": 2048, 00:20:09.435 "data_size": 63488 00:20:09.435 }, 00:20:09.435 { 00:20:09.435 "name": "pt2", 00:20:09.435 "uuid": "63cca392-f1fc-59d4-9c75-902e279137d6", 00:20:09.435 "is_configured": true, 00:20:09.435 "data_offset": 2048, 00:20:09.435 "data_size": 63488 00:20:09.435 }, 00:20:09.435 { 00:20:09.435 "name": "pt3", 00:20:09.435 "uuid": "0b84de62-2eee-584a-be1c-e908cf328d91", 00:20:09.435 "is_configured": true, 00:20:09.435 "data_offset": 2048, 00:20:09.435 "data_size": 63488 00:20:09.435 } 00:20:09.435 ] 00:20:09.435 }' 00:20:09.435 12:39:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:09.435 12:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.003 12:39:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:10.003 12:39:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:10.262 [2024-10-01 12:39:52.658527] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.262 12:39:52 -- bdev/bdev_raid.sh@430 -- # '[' c422139a-9dbb-4a24-ab7f-c5ae70953371 '!=' c422139a-9dbb-4a24-ab7f-c5ae70953371 ']' 00:20:10.262 12:39:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:20:10.262 12:39:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:10.262 
12:39:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:10.262 12:39:52 -- bdev/bdev_raid.sh@511 -- # killprocess 117363 00:20:10.262 12:39:52 -- common/autotest_common.sh@926 -- # '[' -z 117363 ']' 00:20:10.262 12:39:52 -- common/autotest_common.sh@930 -- # kill -0 117363 00:20:10.262 12:39:52 -- common/autotest_common.sh@931 -- # uname 00:20:10.262 12:39:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.262 12:39:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117363 00:20:10.262 12:39:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:10.262 12:39:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:10.262 12:39:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117363' 00:20:10.262 killing process with pid 117363 00:20:10.262 12:39:52 -- common/autotest_common.sh@945 -- # kill 117363 00:20:10.262 [2024-10-01 12:39:52.727162] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:10.262 [2024-10-01 12:39:52.727229] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.262 [2024-10-01 12:39:52.727277] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.262 [2024-10-01 12:39:52.727286] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:20:10.262 12:39:52 -- common/autotest_common.sh@950 -- # wait 117363 00:20:10.521 [2024-10-01 12:39:52.956761] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:11.517 12:39:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:11.517 00:20:11.517 real 0m9.195s 00:20:11.517 user 0m15.121s 00:20:11.517 sys 0m1.548s 00:20:11.517 12:39:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.517 12:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:11.517 ************************************ 00:20:11.517 END TEST raid_superblock_test 00:20:11.517 ************************************ 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:20:11.776 12:39:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:20:11.776 12:39:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:11.776 12:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:11.776 ************************************ 00:20:11.776 START TEST raid_state_function_test 00:20:11.776 ************************************ 00:20:11.776 12:39:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:11.776 12:39:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=117656 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117656' 00:20:11.777 Process raid pid: 117656 00:20:11.777 12:39:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117656 /var/tmp/spdk-raid.sock 00:20:11.777 12:39:54 -- common/autotest_common.sh@819 -- # '[' -z 117656 ']' 00:20:11.777 12:39:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:11.777 12:39:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.777 12:39:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:11.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:11.777 12:39:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.777 12:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:11.777 [2024-10-01 12:39:54.199573] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
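The trace above shows the recurring harness pattern for these raid tests: bdev_svc is started on a private RPC socket with bdev_raid debug logging enabled, waitforlisten polls until the socket answers, and every later rpc.py call targets that socket with -s. A minimal, hedged sketch of the pattern, using only paths and flags visible in this log; the polling loop is an illustrative stand-in for the real waitforlisten helper in autotest_common.sh, whose body is not shown here:

  sock=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!
  # poll until the app is up and the RPC socket answers (stand-in for waitforlisten)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do
      sleep 0.1
  done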
00:20:11.777 [2024-10-01 12:39:54.199736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.037 [2024-10-01 12:39:54.369300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.037 [2024-10-01 12:39:54.558931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.296 [2024-10-01 12:39:54.743049] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.555 12:39:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:12.555 12:39:54 -- common/autotest_common.sh@852 -- # return 0 00:20:12.555 12:39:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:12.814 [2024-10-01 12:39:55.146577] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:12.814 [2024-10-01 12:39:55.146656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:12.814 [2024-10-01 12:39:55.146666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:12.814 [2024-10-01 12:39:55.146700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:12.814 [2024-10-01 12:39:55.146706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:12.814 [2024-10-01 12:39:55.146748] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.814 "name": "Existed_Raid", 00:20:12.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.814 "strip_size_kb": 0, 00:20:12.814 "state": "configuring", 00:20:12.814 "raid_level": "raid1", 00:20:12.814 "superblock": false, 00:20:12.814 "num_base_bdevs": 3, 00:20:12.814 "num_base_bdevs_discovered": 0, 00:20:12.814 "num_base_bdevs_operational": 3, 00:20:12.814 "base_bdevs_list": [ 00:20:12.814 { 00:20:12.814 "name": "BaseBdev1", 00:20:12.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.814 "is_configured": false, 00:20:12.814 "data_offset": 0, 00:20:12.814 "data_size": 0 00:20:12.814 }, 00:20:12.814 { 00:20:12.814 "name": "BaseBdev2", 00:20:12.814 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:12.814 "is_configured": false, 00:20:12.814 "data_offset": 0, 00:20:12.814 "data_size": 0 00:20:12.814 }, 00:20:12.814 { 00:20:12.814 "name": "BaseBdev3", 00:20:12.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.814 "is_configured": false, 00:20:12.814 "data_offset": 0, 00:20:12.814 "data_size": 0 00:20:12.814 } 00:20:12.814 ] 00:20:12.814 }' 00:20:12.814 12:39:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.814 12:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:13.381 12:39:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:13.639 [2024-10-01 12:39:56.029189] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:13.639 [2024-10-01 12:39:56.029222] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:20:13.639 12:39:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:13.898 [2024-10-01 12:39:56.212940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:13.898 [2024-10-01 12:39:56.213013] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:13.898 [2024-10-01 12:39:56.213021] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:13.898 [2024-10-01 12:39:56.213048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:13.898 [2024-10-01 12:39:56.213055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:13.898 [2024-10-01 12:39:56.213080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:13.898 12:39:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:13.898 [2024-10-01 12:39:56.397431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.898 BaseBdev1 00:20:13.898 12:39:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:13.898 12:39:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:13.898 12:39:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:13.898 12:39:56 -- common/autotest_common.sh@889 -- # local i 00:20:13.898 12:39:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:13.898 12:39:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:13.898 12:39:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.156 12:39:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:14.415 [ 00:20:14.415 { 00:20:14.415 "name": "BaseBdev1", 00:20:14.415 "aliases": [ 00:20:14.415 "46c6cfec-b099-4d53-9ba1-4dfbca80e2ea" 00:20:14.415 ], 00:20:14.415 "product_name": "Malloc disk", 00:20:14.415 "block_size": 512, 00:20:14.415 "num_blocks": 65536, 00:20:14.415 "uuid": "46c6cfec-b099-4d53-9ba1-4dfbca80e2ea", 00:20:14.415 "assigned_rate_limits": { 00:20:14.415 "rw_ios_per_sec": 0, 00:20:14.415 "rw_mbytes_per_sec": 0, 00:20:14.415 "r_mbytes_per_sec": 0, 00:20:14.415 "w_mbytes_per_sec": 0 
00:20:14.415 }, 00:20:14.415 "claimed": true, 00:20:14.415 "claim_type": "exclusive_write", 00:20:14.415 "zoned": false, 00:20:14.415 "supported_io_types": { 00:20:14.415 "read": true, 00:20:14.415 "write": true, 00:20:14.415 "unmap": true, 00:20:14.415 "write_zeroes": true, 00:20:14.415 "flush": true, 00:20:14.415 "reset": true, 00:20:14.415 "compare": false, 00:20:14.415 "compare_and_write": false, 00:20:14.415 "abort": true, 00:20:14.415 "nvme_admin": false, 00:20:14.415 "nvme_io": false 00:20:14.415 }, 00:20:14.415 "memory_domains": [ 00:20:14.415 { 00:20:14.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.415 "dma_device_type": 2 00:20:14.415 } 00:20:14.415 ], 00:20:14.415 "driver_specific": {} 00:20:14.415 } 00:20:14.415 ] 00:20:14.415 12:39:56 -- common/autotest_common.sh@895 -- # return 0 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.415 12:39:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.415 "name": "Existed_Raid", 00:20:14.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.415 "strip_size_kb": 0, 00:20:14.415 "state": "configuring", 00:20:14.415 "raid_level": "raid1", 00:20:14.416 "superblock": false, 00:20:14.416 "num_base_bdevs": 3, 00:20:14.416 "num_base_bdevs_discovered": 1, 00:20:14.416 "num_base_bdevs_operational": 3, 00:20:14.416 "base_bdevs_list": [ 00:20:14.416 { 00:20:14.416 "name": "BaseBdev1", 00:20:14.416 "uuid": "46c6cfec-b099-4d53-9ba1-4dfbca80e2ea", 00:20:14.416 "is_configured": true, 00:20:14.416 "data_offset": 0, 00:20:14.416 "data_size": 65536 00:20:14.416 }, 00:20:14.416 { 00:20:14.416 "name": "BaseBdev2", 00:20:14.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.416 "is_configured": false, 00:20:14.416 "data_offset": 0, 00:20:14.416 "data_size": 0 00:20:14.416 }, 00:20:14.416 { 00:20:14.416 "name": "BaseBdev3", 00:20:14.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.416 "is_configured": false, 00:20:14.416 "data_offset": 0, 00:20:14.416 "data_size": 0 00:20:14.416 } 00:20:14.416 ] 00:20:14.416 }' 00:20:14.416 12:39:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.416 12:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.983 12:39:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:15.241 [2024-10-01 12:39:57.639952] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:15.241 [2024-10-01 12:39:57.639991] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 
name Existed_Raid, state configuring 00:20:15.241 12:39:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:15.241 12:39:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:15.501 [2024-10-01 12:39:57.795996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.501 [2024-10-01 12:39:57.798107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:15.501 [2024-10-01 12:39:57.798164] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:15.501 [2024-10-01 12:39:57.798172] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:15.501 [2024-10-01 12:39:57.798212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.501 12:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.501 12:39:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.501 "name": "Existed_Raid", 00:20:15.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.501 "strip_size_kb": 0, 00:20:15.501 "state": "configuring", 00:20:15.501 "raid_level": "raid1", 00:20:15.501 "superblock": false, 00:20:15.501 "num_base_bdevs": 3, 00:20:15.501 "num_base_bdevs_discovered": 1, 00:20:15.501 "num_base_bdevs_operational": 3, 00:20:15.501 "base_bdevs_list": [ 00:20:15.501 { 00:20:15.501 "name": "BaseBdev1", 00:20:15.501 "uuid": "46c6cfec-b099-4d53-9ba1-4dfbca80e2ea", 00:20:15.501 "is_configured": true, 00:20:15.501 "data_offset": 0, 00:20:15.501 "data_size": 65536 00:20:15.501 }, 00:20:15.501 { 00:20:15.501 "name": "BaseBdev2", 00:20:15.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.501 "is_configured": false, 00:20:15.501 "data_offset": 0, 00:20:15.501 "data_size": 0 00:20:15.501 }, 00:20:15.501 { 00:20:15.501 "name": "BaseBdev3", 00:20:15.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.501 "is_configured": false, 00:20:15.501 "data_offset": 0, 00:20:15.501 "data_size": 0 00:20:15.501 } 00:20:15.501 ] 00:20:15.501 }' 00:20:15.501 12:39:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.501 12:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:16.069 12:39:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:16.329 [2024-10-01 12:39:58.742669] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.329 BaseBdev2 00:20:16.329 12:39:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:16.329 12:39:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:20:16.329 12:39:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:16.329 12:39:58 -- common/autotest_common.sh@889 -- # local i 00:20:16.329 12:39:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:16.329 12:39:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:16.329 12:39:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.589 12:39:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:16.589 [ 00:20:16.589 { 00:20:16.589 "name": "BaseBdev2", 00:20:16.589 "aliases": [ 00:20:16.589 "e794dd91-4e10-47dc-844f-f724a3136628" 00:20:16.589 ], 00:20:16.589 "product_name": "Malloc disk", 00:20:16.589 "block_size": 512, 00:20:16.589 "num_blocks": 65536, 00:20:16.589 "uuid": "e794dd91-4e10-47dc-844f-f724a3136628", 00:20:16.589 "assigned_rate_limits": { 00:20:16.589 "rw_ios_per_sec": 0, 00:20:16.589 "rw_mbytes_per_sec": 0, 00:20:16.589 "r_mbytes_per_sec": 0, 00:20:16.589 "w_mbytes_per_sec": 0 00:20:16.589 }, 00:20:16.589 "claimed": true, 00:20:16.589 "claim_type": "exclusive_write", 00:20:16.589 "zoned": false, 00:20:16.589 "supported_io_types": { 00:20:16.589 "read": true, 00:20:16.589 "write": true, 00:20:16.589 "unmap": true, 00:20:16.589 "write_zeroes": true, 00:20:16.589 "flush": true, 00:20:16.589 "reset": true, 00:20:16.589 "compare": false, 00:20:16.589 "compare_and_write": false, 00:20:16.589 "abort": true, 00:20:16.589 "nvme_admin": false, 00:20:16.589 "nvme_io": false 00:20:16.589 }, 00:20:16.589 "memory_domains": [ 00:20:16.589 { 00:20:16.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.589 "dma_device_type": 2 00:20:16.589 } 00:20:16.589 ], 00:20:16.589 "driver_specific": {} 00:20:16.589 } 00:20:16.589 ] 00:20:16.589 12:39:59 -- common/autotest_common.sh@895 -- # return 0 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.589 12:39:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.848 12:39:59 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:20:16.848 "name": "Existed_Raid", 00:20:16.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.848 "strip_size_kb": 0, 00:20:16.848 "state": "configuring", 00:20:16.848 "raid_level": "raid1", 00:20:16.848 "superblock": false, 00:20:16.848 "num_base_bdevs": 3, 00:20:16.848 "num_base_bdevs_discovered": 2, 00:20:16.848 "num_base_bdevs_operational": 3, 00:20:16.848 "base_bdevs_list": [ 00:20:16.848 { 00:20:16.848 "name": "BaseBdev1", 00:20:16.848 "uuid": "46c6cfec-b099-4d53-9ba1-4dfbca80e2ea", 00:20:16.848 "is_configured": true, 00:20:16.848 "data_offset": 0, 00:20:16.848 "data_size": 65536 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "name": "BaseBdev2", 00:20:16.848 "uuid": "e794dd91-4e10-47dc-844f-f724a3136628", 00:20:16.848 "is_configured": true, 00:20:16.848 "data_offset": 0, 00:20:16.848 "data_size": 65536 00:20:16.848 }, 00:20:16.848 { 00:20:16.848 "name": "BaseBdev3", 00:20:16.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.848 "is_configured": false, 00:20:16.848 "data_offset": 0, 00:20:16.848 "data_size": 0 00:20:16.848 } 00:20:16.848 ] 00:20:16.848 }' 00:20:16.848 12:39:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.848 12:39:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.417 12:39:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:17.675 [2024-10-01 12:40:00.011517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.675 [2024-10-01 12:40:00.011563] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:20:17.675 [2024-10-01 12:40:00.011587] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:17.675 [2024-10-01 12:40:00.011718] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:17.675 [2024-10-01 12:40:00.012105] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:20:17.675 [2024-10-01 12:40:00.012131] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:20:17.675 [2024-10-01 12:40:00.012371] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.675 BaseBdev3 00:20:17.675 12:40:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:17.675 12:40:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:20:17.675 12:40:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:17.675 12:40:00 -- common/autotest_common.sh@889 -- # local i 00:20:17.675 12:40:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:17.675 12:40:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:17.675 12:40:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:17.675 12:40:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:17.935 [ 00:20:17.935 { 00:20:17.935 "name": "BaseBdev3", 00:20:17.935 "aliases": [ 00:20:17.935 "a99198c5-8b20-4d3e-a595-61811da83fdb" 00:20:17.935 ], 00:20:17.935 "product_name": "Malloc disk", 00:20:17.935 "block_size": 512, 00:20:17.935 "num_blocks": 65536, 00:20:17.935 "uuid": "a99198c5-8b20-4d3e-a595-61811da83fdb", 00:20:17.935 "assigned_rate_limits": { 00:20:17.935 "rw_ios_per_sec": 0, 00:20:17.935 "rw_mbytes_per_sec": 0, 
00:20:17.935 "r_mbytes_per_sec": 0, 00:20:17.935 "w_mbytes_per_sec": 0 00:20:17.935 }, 00:20:17.935 "claimed": true, 00:20:17.935 "claim_type": "exclusive_write", 00:20:17.935 "zoned": false, 00:20:17.935 "supported_io_types": { 00:20:17.935 "read": true, 00:20:17.935 "write": true, 00:20:17.935 "unmap": true, 00:20:17.935 "write_zeroes": true, 00:20:17.935 "flush": true, 00:20:17.935 "reset": true, 00:20:17.935 "compare": false, 00:20:17.935 "compare_and_write": false, 00:20:17.935 "abort": true, 00:20:17.935 "nvme_admin": false, 00:20:17.935 "nvme_io": false 00:20:17.935 }, 00:20:17.935 "memory_domains": [ 00:20:17.935 { 00:20:17.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.935 "dma_device_type": 2 00:20:17.935 } 00:20:17.935 ], 00:20:17.935 "driver_specific": {} 00:20:17.935 } 00:20:17.935 ] 00:20:17.935 12:40:00 -- common/autotest_common.sh@895 -- # return 0 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.935 12:40:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.195 12:40:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.195 "name": "Existed_Raid", 00:20:18.195 "uuid": "4b1453a3-ce3e-4cac-bbe0-a77a59b479b1", 00:20:18.195 "strip_size_kb": 0, 00:20:18.195 "state": "online", 00:20:18.195 "raid_level": "raid1", 00:20:18.195 "superblock": false, 00:20:18.195 "num_base_bdevs": 3, 00:20:18.195 "num_base_bdevs_discovered": 3, 00:20:18.195 "num_base_bdevs_operational": 3, 00:20:18.195 "base_bdevs_list": [ 00:20:18.195 { 00:20:18.195 "name": "BaseBdev1", 00:20:18.195 "uuid": "46c6cfec-b099-4d53-9ba1-4dfbca80e2ea", 00:20:18.195 "is_configured": true, 00:20:18.195 "data_offset": 0, 00:20:18.195 "data_size": 65536 00:20:18.195 }, 00:20:18.195 { 00:20:18.195 "name": "BaseBdev2", 00:20:18.195 "uuid": "e794dd91-4e10-47dc-844f-f724a3136628", 00:20:18.195 "is_configured": true, 00:20:18.195 "data_offset": 0, 00:20:18.195 "data_size": 65536 00:20:18.195 }, 00:20:18.195 { 00:20:18.195 "name": "BaseBdev3", 00:20:18.195 "uuid": "a99198c5-8b20-4d3e-a595-61811da83fdb", 00:20:18.195 "is_configured": true, 00:20:18.195 "data_offset": 0, 00:20:18.195 "data_size": 65536 00:20:18.195 } 00:20:18.195 ] 00:20:18.195 }' 00:20:18.195 12:40:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.195 12:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:18.763 12:40:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:18.763 [2024-10-01 
12:40:01.249793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.022 "name": "Existed_Raid", 00:20:19.022 "uuid": "4b1453a3-ce3e-4cac-bbe0-a77a59b479b1", 00:20:19.022 "strip_size_kb": 0, 00:20:19.022 "state": "online", 00:20:19.022 "raid_level": "raid1", 00:20:19.022 "superblock": false, 00:20:19.022 "num_base_bdevs": 3, 00:20:19.022 "num_base_bdevs_discovered": 2, 00:20:19.022 "num_base_bdevs_operational": 2, 00:20:19.022 "base_bdevs_list": [ 00:20:19.022 { 00:20:19.022 "name": null, 00:20:19.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.022 "is_configured": false, 00:20:19.022 "data_offset": 0, 00:20:19.022 "data_size": 65536 00:20:19.022 }, 00:20:19.022 { 00:20:19.022 "name": "BaseBdev2", 00:20:19.022 "uuid": "e794dd91-4e10-47dc-844f-f724a3136628", 00:20:19.022 "is_configured": true, 00:20:19.022 "data_offset": 0, 00:20:19.022 "data_size": 65536 00:20:19.022 }, 00:20:19.022 { 00:20:19.022 "name": "BaseBdev3", 00:20:19.022 "uuid": "a99198c5-8b20-4d3e-a595-61811da83fdb", 00:20:19.022 "is_configured": true, 00:20:19.022 "data_offset": 0, 00:20:19.022 "data_size": 65536 00:20:19.022 } 00:20:19.022 ] 00:20:19.022 }' 00:20:19.022 12:40:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.022 12:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:19.589 12:40:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:19.589 12:40:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:19.589 12:40:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.589 12:40:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:19.848 12:40:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:19.848 12:40:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.848 12:40:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:20.107 [2024-10-01 12:40:02.386174] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
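The state checks here hinge on the has_redundancy helper traced at bdev_raid.sh@195-197: for raid1 it returns 0, so the array is expected to stay online after BaseBdev1 is removed (two of three members left), whereas for concat it returned 1 earlier in this log and removal would take the whole array down. A hedged reconstruction from the traced line numbers; the real helper may list additional levels:

  has_redundancy() {
      case $1 in
      raid1) return 0 ;;   # mirrored level: survives losing a base bdev
      *) return 1 ;;       # raid0/concat: no redundancy
      esac
  }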
00:20:20.107 12:40:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:20.107 12:40:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:20.107 12:40:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:20.107 12:40:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.366 12:40:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:20.366 12:40:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:20.366 12:40:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:20.366 [2024-10-01 12:40:02.816179] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:20.366 [2024-10-01 12:40:02.816207] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:20.366 [2024-10-01 12:40:02.816265] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.366 [2024-10-01 12:40:02.900006] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.366 [2024-10-01 12:40:02.900046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:20:20.626 12:40:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:20.626 12:40:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:20.626 12:40:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.626 12:40:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:20.626 12:40:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:20.626 12:40:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:20.626 12:40:03 -- bdev/bdev_raid.sh@287 -- # killprocess 117656 00:20:20.626 12:40:03 -- common/autotest_common.sh@926 -- # '[' -z 117656 ']' 00:20:20.626 12:40:03 -- common/autotest_common.sh@930 -- # kill -0 117656 00:20:20.626 12:40:03 -- common/autotest_common.sh@931 -- # uname 00:20:20.626 12:40:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:20.626 12:40:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117656 00:20:20.626 12:40:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:20.626 12:40:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:20.626 12:40:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117656' 00:20:20.626 killing process with pid 117656 00:20:20.626 12:40:03 -- common/autotest_common.sh@945 -- # kill 117656 00:20:20.626 [2024-10-01 12:40:03.139424] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:20.626 [2024-10-01 12:40:03.139547] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.626 12:40:03 -- common/autotest_common.sh@950 -- # wait 117656 00:20:22.007 ************************************ 00:20:22.007 END TEST raid_state_function_test 00:20:22.007 ************************************ 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:22.007 00:20:22.007 real 0m10.237s 00:20:22.007 user 0m17.034s 00:20:22.007 sys 0m1.730s 00:20:22.007 12:40:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.007 12:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
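Stripped of the xtrace noise, the test that just finished drove the target through the following RPC sequence. This is a hedged condensation built only from calls visible in the log; the 32 and 512 arguments to bdev_malloc_create are capacity in MiB and block size in bytes, which is why each member reports 65536 blocks of 512 bytes in the dumps above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "$b"       # 32 MiB of 512-byte blocks -> 65536 blocks
  done
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # online
  $rpc bdev_malloc_delete BaseBdev1                # raid1 keeps the array online, 2 of 3 left
  $rpc bdev_malloc_delete BaseBdev2
  $rpc bdev_malloc_delete BaseBdev3                # last member gone: online -> offline, cleanup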
00:20:22.007 12:40:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:20:22.007 12:40:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:22.007 12:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.007 ************************************ 00:20:22.007 START TEST raid_state_function_test_sb 00:20:22.007 ************************************ 00:20:22.007 12:40:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.007 12:40:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=118014 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:22.008 Process raid pid: 118014 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118014' 00:20:22.008 12:40:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118014 /var/tmp/spdk-raid.sock 00:20:22.008 12:40:04 -- common/autotest_common.sh@819 -- # '[' -z 118014 ']' 00:20:22.008 12:40:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:22.008 12:40:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.008 12:40:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:22.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:22.008 12:40:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.008 12:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.008 [2024-10-01 12:40:04.523282] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
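The functional difference in this _sb variant is that superblock=true turns superblock_create_arg into -s, so bdev_raid_create writes a superblock onto each base bdev. The cost shows up in the JSON dumps further down: each 65536-block malloc member reserves 2048 blocks for the superblock, leaving data_offset 2048 and data_size 63488 instead of the 0 and 65536 seen in the non-superblock run. The changed create call, exactly as traced below:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid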
00:20:22.008 [2024-10-01 12:40:04.523596] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.267 [2024-10-01 12:40:04.693650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.527 [2024-10-01 12:40:04.879786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.786 [2024-10-01 12:40:05.070349] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.786 12:40:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:22.786 12:40:05 -- common/autotest_common.sh@852 -- # return 0 00:20:22.786 12:40:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:23.044 [2024-10-01 12:40:05.473432] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:23.044 [2024-10-01 12:40:05.473652] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:23.044 [2024-10-01 12:40:05.473739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:23.044 [2024-10-01 12:40:05.473792] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:23.044 [2024-10-01 12:40:05.473817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:23.044 [2024-10-01 12:40:05.473880] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.044 12:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.303 12:40:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.303 "name": "Existed_Raid", 00:20:23.303 "uuid": "90f172a0-b2df-43b9-9948-e1867b977105", 00:20:23.303 "strip_size_kb": 0, 00:20:23.303 "state": "configuring", 00:20:23.303 "raid_level": "raid1", 00:20:23.303 "superblock": true, 00:20:23.303 "num_base_bdevs": 3, 00:20:23.303 "num_base_bdevs_discovered": 0, 00:20:23.303 "num_base_bdevs_operational": 3, 00:20:23.303 "base_bdevs_list": [ 00:20:23.303 { 00:20:23.303 "name": "BaseBdev1", 00:20:23.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.303 "is_configured": false, 00:20:23.303 "data_offset": 0, 00:20:23.303 "data_size": 0 00:20:23.303 }, 00:20:23.303 { 00:20:23.303 "name": "BaseBdev2", 00:20:23.303 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:23.303 "is_configured": false, 00:20:23.303 "data_offset": 0, 00:20:23.303 "data_size": 0 00:20:23.303 }, 00:20:23.303 { 00:20:23.303 "name": "BaseBdev3", 00:20:23.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.303 "is_configured": false, 00:20:23.303 "data_offset": 0, 00:20:23.303 "data_size": 0 00:20:23.303 } 00:20:23.303 ] 00:20:23.303 }' 00:20:23.303 12:40:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.303 12:40:05 -- common/autotest_common.sh@10 -- # set +x 00:20:23.867 12:40:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:23.867 [2024-10-01 12:40:06.364081] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:23.867 [2024-10-01 12:40:06.364230] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:20:23.867 12:40:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:24.125 [2024-10-01 12:40:06.540033] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:24.125 [2024-10-01 12:40:06.540202] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:24.125 [2024-10-01 12:40:06.540281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.125 [2024-10-01 12:40:06.540339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.125 [2024-10-01 12:40:06.540365] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:24.125 [2024-10-01 12:40:06.540408] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:24.125 12:40:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:24.385 [2024-10-01 12:40:06.748522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.385 BaseBdev1 00:20:24.385 12:40:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:24.385 12:40:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:24.385 12:40:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:24.385 12:40:06 -- common/autotest_common.sh@889 -- # local i 00:20:24.385 12:40:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:24.385 12:40:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:24.385 12:40:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.644 12:40:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:24.644 [ 00:20:24.644 { 00:20:24.644 "name": "BaseBdev1", 00:20:24.644 "aliases": [ 00:20:24.644 "c2d4c7a3-d7e3-4bc0-9daa-85096739b365" 00:20:24.644 ], 00:20:24.644 "product_name": "Malloc disk", 00:20:24.644 "block_size": 512, 00:20:24.644 "num_blocks": 65536, 00:20:24.644 "uuid": "c2d4c7a3-d7e3-4bc0-9daa-85096739b365", 00:20:24.644 "assigned_rate_limits": { 00:20:24.644 "rw_ios_per_sec": 0, 00:20:24.644 "rw_mbytes_per_sec": 0, 00:20:24.644 "r_mbytes_per_sec": 0, 00:20:24.644 "w_mbytes_per_sec": 0 
00:20:24.644 }, 00:20:24.644 "claimed": true, 00:20:24.644 "claim_type": "exclusive_write", 00:20:24.644 "zoned": false, 00:20:24.644 "supported_io_types": { 00:20:24.644 "read": true, 00:20:24.644 "write": true, 00:20:24.644 "unmap": true, 00:20:24.644 "write_zeroes": true, 00:20:24.644 "flush": true, 00:20:24.644 "reset": true, 00:20:24.644 "compare": false, 00:20:24.644 "compare_and_write": false, 00:20:24.644 "abort": true, 00:20:24.644 "nvme_admin": false, 00:20:24.644 "nvme_io": false 00:20:24.644 }, 00:20:24.644 "memory_domains": [ 00:20:24.644 { 00:20:24.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.644 "dma_device_type": 2 00:20:24.644 } 00:20:24.644 ], 00:20:24.644 "driver_specific": {} 00:20:24.644 } 00:20:24.644 ] 00:20:24.644 12:40:07 -- common/autotest_common.sh@895 -- # return 0 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.644 12:40:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.902 12:40:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.902 "name": "Existed_Raid", 00:20:24.902 "uuid": "1e8f4f61-c095-465d-aa4e-861d249a53cf", 00:20:24.902 "strip_size_kb": 0, 00:20:24.902 "state": "configuring", 00:20:24.902 "raid_level": "raid1", 00:20:24.902 "superblock": true, 00:20:24.902 "num_base_bdevs": 3, 00:20:24.902 "num_base_bdevs_discovered": 1, 00:20:24.902 "num_base_bdevs_operational": 3, 00:20:24.902 "base_bdevs_list": [ 00:20:24.902 { 00:20:24.902 "name": "BaseBdev1", 00:20:24.902 "uuid": "c2d4c7a3-d7e3-4bc0-9daa-85096739b365", 00:20:24.902 "is_configured": true, 00:20:24.902 "data_offset": 2048, 00:20:24.902 "data_size": 63488 00:20:24.902 }, 00:20:24.902 { 00:20:24.902 "name": "BaseBdev2", 00:20:24.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.902 "is_configured": false, 00:20:24.902 "data_offset": 0, 00:20:24.902 "data_size": 0 00:20:24.902 }, 00:20:24.902 { 00:20:24.902 "name": "BaseBdev3", 00:20:24.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.902 "is_configured": false, 00:20:24.902 "data_offset": 0, 00:20:24.902 "data_size": 0 00:20:24.902 } 00:20:24.902 ] 00:20:24.902 }' 00:20:24.902 12:40:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.902 12:40:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.469 12:40:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:25.727 [2024-10-01 12:40:08.006909] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:25.727 [2024-10-01 12:40:08.007060] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:20:25.727 12:40:08 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:25.727 12:40:08 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:25.985 12:40:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:25.985 BaseBdev1 00:20:25.985 12:40:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:25.985 12:40:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:25.985 12:40:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:25.985 12:40:08 -- common/autotest_common.sh@889 -- # local i 00:20:25.985 12:40:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:25.985 12:40:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:25.985 12:40:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:26.243 12:40:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:26.501 [ 00:20:26.501 { 00:20:26.501 "name": "BaseBdev1", 00:20:26.501 "aliases": [ 00:20:26.501 "c6226a1d-ee4d-4db7-b12e-3fbce9a0415d" 00:20:26.501 ], 00:20:26.501 "product_name": "Malloc disk", 00:20:26.501 "block_size": 512, 00:20:26.501 "num_blocks": 65536, 00:20:26.501 "uuid": "c6226a1d-ee4d-4db7-b12e-3fbce9a0415d", 00:20:26.501 "assigned_rate_limits": { 00:20:26.501 "rw_ios_per_sec": 0, 00:20:26.501 "rw_mbytes_per_sec": 0, 00:20:26.501 "r_mbytes_per_sec": 0, 00:20:26.501 "w_mbytes_per_sec": 0 00:20:26.501 }, 00:20:26.501 "claimed": false, 00:20:26.501 "zoned": false, 00:20:26.501 "supported_io_types": { 00:20:26.501 "read": true, 00:20:26.501 "write": true, 00:20:26.501 "unmap": true, 00:20:26.501 "write_zeroes": true, 00:20:26.501 "flush": true, 00:20:26.501 "reset": true, 00:20:26.501 "compare": false, 00:20:26.501 "compare_and_write": false, 00:20:26.501 "abort": true, 00:20:26.501 "nvme_admin": false, 00:20:26.501 "nvme_io": false 00:20:26.501 }, 00:20:26.501 "memory_domains": [ 00:20:26.501 { 00:20:26.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.501 "dma_device_type": 2 00:20:26.501 } 00:20:26.501 ], 00:20:26.501 "driver_specific": {} 00:20:26.501 } 00:20:26.501 ] 00:20:26.501 12:40:08 -- common/autotest_common.sh@895 -- # return 0 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:26.501 [2024-10-01 12:40:08.932361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.501 [2024-10-01 12:40:08.934567] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:26.501 [2024-10-01 12:40:08.934745] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:26.501 [2024-10-01 12:40:08.934839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:26.501 [2024-10-01 12:40:08.934901] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:26.501 12:40:08 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.501 12:40:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.758 12:40:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.759 "name": "Existed_Raid", 00:20:26.759 "uuid": "6c4b9432-a6a9-4c60-ae2c-064ad900a480", 00:20:26.759 "strip_size_kb": 0, 00:20:26.759 "state": "configuring", 00:20:26.759 "raid_level": "raid1", 00:20:26.759 "superblock": true, 00:20:26.759 "num_base_bdevs": 3, 00:20:26.759 "num_base_bdevs_discovered": 1, 00:20:26.759 "num_base_bdevs_operational": 3, 00:20:26.759 "base_bdevs_list": [ 00:20:26.759 { 00:20:26.759 "name": "BaseBdev1", 00:20:26.759 "uuid": "c6226a1d-ee4d-4db7-b12e-3fbce9a0415d", 00:20:26.759 "is_configured": true, 00:20:26.759 "data_offset": 2048, 00:20:26.759 "data_size": 63488 00:20:26.759 }, 00:20:26.759 { 00:20:26.759 "name": "BaseBdev2", 00:20:26.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.759 "is_configured": false, 00:20:26.759 "data_offset": 0, 00:20:26.759 "data_size": 0 00:20:26.759 }, 00:20:26.759 { 00:20:26.759 "name": "BaseBdev3", 00:20:26.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.759 "is_configured": false, 00:20:26.759 "data_offset": 0, 00:20:26.759 "data_size": 0 00:20:26.759 } 00:20:26.759 ] 00:20:26.759 }' 00:20:26.759 12:40:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.759 12:40:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.325 12:40:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:27.584 [2024-10-01 12:40:09.894057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.584 BaseBdev2 00:20:27.584 12:40:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:27.584 12:40:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:20:27.584 12:40:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:27.584 12:40:09 -- common/autotest_common.sh@889 -- # local i 00:20:27.584 12:40:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:27.584 12:40:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:27.584 12:40:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:27.843 12:40:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:27.843 [ 00:20:27.843 { 00:20:27.843 "name": "BaseBdev2", 00:20:27.843 "aliases": [ 00:20:27.843 
"230c4b4e-35d1-4301-a234-cc070c45134d" 00:20:27.843 ], 00:20:27.843 "product_name": "Malloc disk", 00:20:27.843 "block_size": 512, 00:20:27.843 "num_blocks": 65536, 00:20:27.843 "uuid": "230c4b4e-35d1-4301-a234-cc070c45134d", 00:20:27.843 "assigned_rate_limits": { 00:20:27.843 "rw_ios_per_sec": 0, 00:20:27.843 "rw_mbytes_per_sec": 0, 00:20:27.843 "r_mbytes_per_sec": 0, 00:20:27.843 "w_mbytes_per_sec": 0 00:20:27.843 }, 00:20:27.843 "claimed": true, 00:20:27.843 "claim_type": "exclusive_write", 00:20:27.843 "zoned": false, 00:20:27.843 "supported_io_types": { 00:20:27.843 "read": true, 00:20:27.843 "write": true, 00:20:27.843 "unmap": true, 00:20:27.843 "write_zeroes": true, 00:20:27.843 "flush": true, 00:20:27.843 "reset": true, 00:20:27.843 "compare": false, 00:20:27.843 "compare_and_write": false, 00:20:27.843 "abort": true, 00:20:27.843 "nvme_admin": false, 00:20:27.843 "nvme_io": false 00:20:27.843 }, 00:20:27.843 "memory_domains": [ 00:20:27.843 { 00:20:27.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.843 "dma_device_type": 2 00:20:27.843 } 00:20:27.843 ], 00:20:27.843 "driver_specific": {} 00:20:27.843 } 00:20:27.843 ] 00:20:27.843 12:40:10 -- common/autotest_common.sh@895 -- # return 0 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.843 12:40:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.102 12:40:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.102 "name": "Existed_Raid", 00:20:28.102 "uuid": "6c4b9432-a6a9-4c60-ae2c-064ad900a480", 00:20:28.102 "strip_size_kb": 0, 00:20:28.102 "state": "configuring", 00:20:28.102 "raid_level": "raid1", 00:20:28.102 "superblock": true, 00:20:28.102 "num_base_bdevs": 3, 00:20:28.102 "num_base_bdevs_discovered": 2, 00:20:28.102 "num_base_bdevs_operational": 3, 00:20:28.102 "base_bdevs_list": [ 00:20:28.102 { 00:20:28.102 "name": "BaseBdev1", 00:20:28.102 "uuid": "c6226a1d-ee4d-4db7-b12e-3fbce9a0415d", 00:20:28.102 "is_configured": true, 00:20:28.102 "data_offset": 2048, 00:20:28.102 "data_size": 63488 00:20:28.102 }, 00:20:28.102 { 00:20:28.102 "name": "BaseBdev2", 00:20:28.102 "uuid": "230c4b4e-35d1-4301-a234-cc070c45134d", 00:20:28.102 "is_configured": true, 00:20:28.102 "data_offset": 2048, 00:20:28.102 "data_size": 63488 00:20:28.102 }, 00:20:28.102 { 00:20:28.102 "name": "BaseBdev3", 00:20:28.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.102 "is_configured": false, 00:20:28.102 "data_offset": 0, 00:20:28.102 "data_size": 0 00:20:28.102 } 
00:20:28.102 ] 00:20:28.102 }' 00:20:28.102 12:40:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.102 12:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.672 12:40:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:28.947 [2024-10-01 12:40:11.234758] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:28.947 [2024-10-01 12:40:11.235129] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:28.947 BaseBdev3 00:20:28.947 [2024-10-01 12:40:11.236392] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:28.947 [2024-10-01 12:40:11.236618] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:28.947 [2024-10-01 12:40:11.237103] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:20:28.947 [2024-10-01 12:40:11.237215] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:28.947 [2024-10-01 12:40:11.237457] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.947 12:40:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:28.947 12:40:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:20:28.947 12:40:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:28.947 12:40:11 -- common/autotest_common.sh@889 -- # local i 00:20:28.947 12:40:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:28.947 12:40:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:28.947 12:40:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.947 12:40:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:29.223 [ 00:20:29.223 { 00:20:29.223 "name": "BaseBdev3", 00:20:29.223 "aliases": [ 00:20:29.223 "79b0db85-c516-4685-be9e-599238d812d5" 00:20:29.223 ], 00:20:29.223 "product_name": "Malloc disk", 00:20:29.223 "block_size": 512, 00:20:29.223 "num_blocks": 65536, 00:20:29.223 "uuid": "79b0db85-c516-4685-be9e-599238d812d5", 00:20:29.223 "assigned_rate_limits": { 00:20:29.223 "rw_ios_per_sec": 0, 00:20:29.223 "rw_mbytes_per_sec": 0, 00:20:29.223 "r_mbytes_per_sec": 0, 00:20:29.223 "w_mbytes_per_sec": 0 00:20:29.223 }, 00:20:29.223 "claimed": true, 00:20:29.223 "claim_type": "exclusive_write", 00:20:29.223 "zoned": false, 00:20:29.223 "supported_io_types": { 00:20:29.223 "read": true, 00:20:29.223 "write": true, 00:20:29.223 "unmap": true, 00:20:29.223 "write_zeroes": true, 00:20:29.223 "flush": true, 00:20:29.223 "reset": true, 00:20:29.223 "compare": false, 00:20:29.223 "compare_and_write": false, 00:20:29.223 "abort": true, 00:20:29.223 "nvme_admin": false, 00:20:29.223 "nvme_io": false 00:20:29.223 }, 00:20:29.223 "memory_domains": [ 00:20:29.223 { 00:20:29.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.223 "dma_device_type": 2 00:20:29.223 } 00:20:29.223 ], 00:20:29.223 "driver_specific": {} 00:20:29.223 } 00:20:29.223 ] 00:20:29.223 12:40:11 -- common/autotest_common.sh@895 -- # return 0 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.223 12:40:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.482 12:40:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.482 "name": "Existed_Raid", 00:20:29.482 "uuid": "6c4b9432-a6a9-4c60-ae2c-064ad900a480", 00:20:29.482 "strip_size_kb": 0, 00:20:29.482 "state": "online", 00:20:29.482 "raid_level": "raid1", 00:20:29.482 "superblock": true, 00:20:29.482 "num_base_bdevs": 3, 00:20:29.482 "num_base_bdevs_discovered": 3, 00:20:29.482 "num_base_bdevs_operational": 3, 00:20:29.482 "base_bdevs_list": [ 00:20:29.482 { 00:20:29.482 "name": "BaseBdev1", 00:20:29.482 "uuid": "c6226a1d-ee4d-4db7-b12e-3fbce9a0415d", 00:20:29.482 "is_configured": true, 00:20:29.482 "data_offset": 2048, 00:20:29.482 "data_size": 63488 00:20:29.482 }, 00:20:29.482 { 00:20:29.482 "name": "BaseBdev2", 00:20:29.482 "uuid": "230c4b4e-35d1-4301-a234-cc070c45134d", 00:20:29.482 "is_configured": true, 00:20:29.482 "data_offset": 2048, 00:20:29.482 "data_size": 63488 00:20:29.482 }, 00:20:29.482 { 00:20:29.482 "name": "BaseBdev3", 00:20:29.482 "uuid": "79b0db85-c516-4685-be9e-599238d812d5", 00:20:29.482 "is_configured": true, 00:20:29.482 "data_offset": 2048, 00:20:29.482 "data_size": 63488 00:20:29.482 } 00:20:29.482 ] 00:20:29.482 }' 00:20:29.482 12:40:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.482 12:40:11 -- common/autotest_common.sh@10 -- # set +x 00:20:30.051 12:40:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:30.051 [2024-10-01 12:40:12.509030] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.309 "name": "Existed_Raid", 00:20:30.309 "uuid": "6c4b9432-a6a9-4c60-ae2c-064ad900a480", 00:20:30.309 "strip_size_kb": 0, 00:20:30.309 "state": "online", 00:20:30.309 "raid_level": "raid1", 00:20:30.309 "superblock": true, 00:20:30.309 "num_base_bdevs": 3, 00:20:30.309 "num_base_bdevs_discovered": 2, 00:20:30.309 "num_base_bdevs_operational": 2, 00:20:30.309 "base_bdevs_list": [ 00:20:30.309 { 00:20:30.309 "name": null, 00:20:30.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.309 "is_configured": false, 00:20:30.309 "data_offset": 2048, 00:20:30.309 "data_size": 63488 00:20:30.309 }, 00:20:30.309 { 00:20:30.309 "name": "BaseBdev2", 00:20:30.309 "uuid": "230c4b4e-35d1-4301-a234-cc070c45134d", 00:20:30.309 "is_configured": true, 00:20:30.309 "data_offset": 2048, 00:20:30.309 "data_size": 63488 00:20:30.309 }, 00:20:30.309 { 00:20:30.309 "name": "BaseBdev3", 00:20:30.309 "uuid": "79b0db85-c516-4685-be9e-599238d812d5", 00:20:30.309 "is_configured": true, 00:20:30.309 "data_offset": 2048, 00:20:30.309 "data_size": 63488 00:20:30.309 } 00:20:30.309 ] 00:20:30.309 }' 00:20:30.309 12:40:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.309 12:40:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.877 12:40:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:30.877 12:40:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:30.877 12:40:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.877 12:40:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:31.136 12:40:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:31.136 12:40:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.136 12:40:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:31.136 [2024-10-01 12:40:13.668796] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:31.395 12:40:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:31.395 12:40:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.395 12:40:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:31.395 12:40:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.654 12:40:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:31.654 12:40:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.654 12:40:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:31.654 [2024-10-01 12:40:14.122483] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:31.654 [2024-10-01 12:40:14.122662] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.654 [2024-10-01 12:40:14.122850] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.914 [2024-10-01 12:40:14.207722] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.914 [2024-10-01 12:40:14.207968] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:31.914 12:40:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:31.914 12:40:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.914 12:40:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.914 12:40:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:31.914 12:40:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:31.914 12:40:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:31.914 12:40:14 -- bdev/bdev_raid.sh@287 -- # killprocess 118014 00:20:31.914 12:40:14 -- common/autotest_common.sh@926 -- # '[' -z 118014 ']' 00:20:31.914 12:40:14 -- common/autotest_common.sh@930 -- # kill -0 118014 00:20:31.914 12:40:14 -- common/autotest_common.sh@931 -- # uname 00:20:31.914 12:40:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:31.914 12:40:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118014 00:20:31.914 12:40:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:31.914 12:40:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:31.914 12:40:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118014' 00:20:31.914 killing process with pid 118014 00:20:31.914 12:40:14 -- common/autotest_common.sh@945 -- # kill 118014 00:20:31.914 [2024-10-01 12:40:14.440848] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:31.914 12:40:14 -- common/autotest_common.sh@950 -- # wait 118014 00:20:31.914 [2024-10-01 12:40:14.441117] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:33.291 00:20:33.291 real 0m11.205s 00:20:33.291 user 0m18.653s 00:20:33.291 sys 0m1.882s 00:20:33.291 12:40:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.291 12:40:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.291 ************************************ 00:20:33.291 END TEST raid_state_function_test_sb 00:20:33.291 ************************************ 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:33.291 12:40:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:33.291 12:40:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:33.291 12:40:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.291 ************************************ 00:20:33.291 START TEST raid_superblock_test 00:20:33.291 ************************************ 00:20:33.291 12:40:15 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=118387 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:33.291 12:40:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118387 /var/tmp/spdk-raid.sock 00:20:33.291 12:40:15 -- common/autotest_common.sh@819 -- # '[' -z 118387 ']' 00:20:33.291 12:40:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:33.291 12:40:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:33.291 12:40:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:33.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:33.291 12:40:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:33.291 12:40:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.291 [2024-10-01 12:40:15.818779] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:33.291 [2024-10-01 12:40:15.819113] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118387 ] 00:20:33.551 [2024-10-01 12:40:15.987907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.810 [2024-10-01 12:40:16.174501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.070 [2024-10-01 12:40:16.354267] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.070 12:40:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:34.070 12:40:16 -- common/autotest_common.sh@852 -- # return 0 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.070 12:40:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:34.329 malloc1 00:20:34.329 12:40:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:34.588 [2024-10-01 12:40:16.970030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:34.588 [2024-10-01 12:40:16.970272] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.588 [2024-10-01 12:40:16.970342] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:34.588 [2024-10-01 12:40:16.970473] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.588 [2024-10-01 12:40:16.973005] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.588 [2024-10-01 12:40:16.973166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:34.588 pt1 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.588 12:40:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:34.848 malloc2 00:20:34.848 12:40:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:34.848 [2024-10-01 12:40:17.372808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:34.848 [2024-10-01 12:40:17.373016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.848 [2024-10-01 12:40:17.373092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:34.848 [2024-10-01 12:40:17.373221] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.848 [2024-10-01 12:40:17.375608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.848 [2024-10-01 12:40:17.375775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:34.848 pt2 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:35.108 malloc3 00:20:35.108 12:40:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:35.367 [2024-10-01 12:40:17.759991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:35.367 [2024-10-01 12:40:17.760194] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.367 [2024-10-01 12:40:17.760277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:35.367 [2024-10-01 12:40:17.760394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.367 [2024-10-01 12:40:17.762762] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.367 [2024-10-01 12:40:17.762925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:35.367 pt3 00:20:35.367 12:40:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:35.367 12:40:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:35.367 12:40:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:35.628 [2024-10-01 12:40:17.911825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:35.628 [2024-10-01 12:40:17.913937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:35.628 [2024-10-01 12:40:17.914095] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:35.628 [2024-10-01 12:40:17.914281] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:20:35.628 [2024-10-01 12:40:17.914373] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:35.628 [2024-10-01 12:40:17.914519] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:35.628 [2024-10-01 12:40:17.914878] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:20:35.628 [2024-10-01 12:40:17.914917] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:20:35.628 [2024-10-01 12:40:17.915226] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.628 12:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.628 12:40:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.628 "name": "raid_bdev1", 00:20:35.628 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:35.628 "strip_size_kb": 0, 00:20:35.628 "state": "online", 00:20:35.628 "raid_level": "raid1", 00:20:35.628 "superblock": true, 00:20:35.628 "num_base_bdevs": 3, 00:20:35.628 "num_base_bdevs_discovered": 3, 00:20:35.628 "num_base_bdevs_operational": 3, 00:20:35.628 "base_bdevs_list": [ 00:20:35.628 { 00:20:35.628 "name": 
"pt1", 00:20:35.628 "uuid": "39424da2-a7e4-5000-93b3-1fa77c7e42b5", 00:20:35.628 "is_configured": true, 00:20:35.628 "data_offset": 2048, 00:20:35.628 "data_size": 63488 00:20:35.628 }, 00:20:35.628 { 00:20:35.628 "name": "pt2", 00:20:35.628 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:35.628 "is_configured": true, 00:20:35.628 "data_offset": 2048, 00:20:35.628 "data_size": 63488 00:20:35.628 }, 00:20:35.628 { 00:20:35.628 "name": "pt3", 00:20:35.628 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:35.628 "is_configured": true, 00:20:35.628 "data_offset": 2048, 00:20:35.628 "data_size": 63488 00:20:35.628 } 00:20:35.628 ] 00:20:35.628 }' 00:20:35.628 12:40:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.628 12:40:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.197 12:40:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:36.197 12:40:18 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:36.457 [2024-10-01 12:40:18.770688] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.457 12:40:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=978c8e6a-5204-4c0e-b0f2-b9f8df0551f7 00:20:36.457 12:40:18 -- bdev/bdev_raid.sh@380 -- # '[' -z 978c8e6a-5204-4c0e-b0f2-b9f8df0551f7 ']' 00:20:36.457 12:40:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:36.457 [2024-10-01 12:40:18.954262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:36.457 [2024-10-01 12:40:18.954389] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.457 [2024-10-01 12:40:18.954597] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.457 [2024-10-01 12:40:18.954702] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.457 [2024-10-01 12:40:18.954917] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:20:36.457 12:40:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.457 12:40:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:36.720 12:40:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:36.720 12:40:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:36.720 12:40:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.720 12:40:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:36.979 12:40:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.979 12:40:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:36.979 12:40:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.979 12:40:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:37.238 12:40:19 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:37.238 12:40:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:37.498 12:40:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:37.498 12:40:19 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:37.498 12:40:19 -- common/autotest_common.sh@640 -- # local es=0 00:20:37.498 12:40:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:37.498 12:40:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.498 12:40:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.498 12:40:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.498 12:40:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.498 12:40:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.498 12:40:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.498 12:40:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.498 12:40:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:37.498 12:40:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:37.498 [2024-10-01 12:40:19.952810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:37.498 [2024-10-01 12:40:19.954957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:37.498 [2024-10-01 12:40:19.955134] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:37.498 [2024-10-01 12:40:19.955214] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:37.498 [2024-10-01 12:40:19.955398] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:37.498 [2024-10-01 12:40:19.955514] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:37.498 [2024-10-01 12:40:19.955592] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.498 [2024-10-01 12:40:19.955691] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:20:37.498 request: 00:20:37.498 { 00:20:37.498 "name": "raid_bdev1", 00:20:37.498 "raid_level": "raid1", 00:20:37.498 "base_bdevs": [ 00:20:37.498 "malloc1", 00:20:37.498 "malloc2", 00:20:37.498 "malloc3" 00:20:37.498 ], 00:20:37.498 "superblock": false, 00:20:37.498 "method": "bdev_raid_create", 00:20:37.498 "req_id": 1 00:20:37.498 } 00:20:37.498 Got JSON-RPC error response 00:20:37.498 response: 00:20:37.498 { 00:20:37.498 "code": -17, 00:20:37.498 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:37.498 } 00:20:37.498 12:40:19 -- common/autotest_common.sh@643 -- # es=1 00:20:37.498 12:40:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:37.498 12:40:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:37.498 12:40:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:37.498 12:40:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:37.498 12:40:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:37.757 12:40:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:37.757 12:40:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:37.757 12:40:20 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:38.016 [2024-10-01 12:40:20.332217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:38.016 [2024-10-01 12:40:20.332407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.016 [2024-10-01 12:40:20.332492] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:38.016 [2024-10-01 12:40:20.332628] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.016 [2024-10-01 12:40:20.335160] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.016 [2024-10-01 12:40:20.335302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:38.016 [2024-10-01 12:40:20.335506] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:38.016 [2024-10-01 12:40:20.335583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:38.016 pt1 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:38.016 "name": "raid_bdev1", 00:20:38.016 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:38.016 "strip_size_kb": 0, 00:20:38.016 "state": "configuring", 00:20:38.016 "raid_level": "raid1", 00:20:38.016 "superblock": true, 00:20:38.016 "num_base_bdevs": 3, 00:20:38.016 "num_base_bdevs_discovered": 1, 00:20:38.016 "num_base_bdevs_operational": 3, 00:20:38.016 "base_bdevs_list": [ 00:20:38.016 { 00:20:38.016 "name": "pt1", 00:20:38.016 "uuid": "39424da2-a7e4-5000-93b3-1fa77c7e42b5", 00:20:38.016 "is_configured": true, 00:20:38.016 "data_offset": 2048, 00:20:38.016 "data_size": 63488 00:20:38.016 }, 00:20:38.016 { 00:20:38.016 "name": null, 00:20:38.016 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:38.016 "is_configured": false, 00:20:38.016 "data_offset": 2048, 00:20:38.016 "data_size": 63488 00:20:38.016 }, 00:20:38.016 { 00:20:38.016 "name": null, 00:20:38.016 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:38.016 "is_configured": false, 00:20:38.016 "data_offset": 2048, 00:20:38.016 
"data_size": 63488 00:20:38.016 } 00:20:38.016 ] 00:20:38.016 }' 00:20:38.016 12:40:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:38.016 12:40:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.584 12:40:21 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:20:38.584 12:40:21 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:38.842 [2024-10-01 12:40:21.191015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:38.842 [2024-10-01 12:40:21.191206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.842 [2024-10-01 12:40:21.191279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:38.842 [2024-10-01 12:40:21.191384] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.843 [2024-10-01 12:40:21.191808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.843 [2024-10-01 12:40:21.191961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:38.843 [2024-10-01 12:40:21.192154] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:38.843 [2024-10-01 12:40:21.192341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:38.843 pt2 00:20:38.843 12:40:21 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:38.843 [2024-10-01 12:40:21.366781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.102 "name": "raid_bdev1", 00:20:39.102 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:39.102 "strip_size_kb": 0, 00:20:39.102 "state": "configuring", 00:20:39.102 "raid_level": "raid1", 00:20:39.102 "superblock": true, 00:20:39.102 "num_base_bdevs": 3, 00:20:39.102 "num_base_bdevs_discovered": 1, 00:20:39.102 "num_base_bdevs_operational": 3, 00:20:39.102 "base_bdevs_list": [ 00:20:39.102 { 00:20:39.102 "name": "pt1", 00:20:39.102 "uuid": "39424da2-a7e4-5000-93b3-1fa77c7e42b5", 00:20:39.102 "is_configured": true, 00:20:39.102 "data_offset": 2048, 00:20:39.102 "data_size": 63488 00:20:39.102 }, 00:20:39.102 { 00:20:39.102 "name": null, 00:20:39.102 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 
00:20:39.102 "is_configured": false, 00:20:39.102 "data_offset": 2048, 00:20:39.102 "data_size": 63488 00:20:39.102 }, 00:20:39.102 { 00:20:39.102 "name": null, 00:20:39.102 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:39.102 "is_configured": false, 00:20:39.102 "data_offset": 2048, 00:20:39.102 "data_size": 63488 00:20:39.102 } 00:20:39.102 ] 00:20:39.102 }' 00:20:39.102 12:40:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.102 12:40:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.671 12:40:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:39.671 12:40:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:39.671 12:40:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:39.671 [2024-10-01 12:40:22.205530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:39.671 [2024-10-01 12:40:22.205727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.671 [2024-10-01 12:40:22.205790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:39.930 [2024-10-01 12:40:22.205883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.930 [2024-10-01 12:40:22.206270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.930 [2024-10-01 12:40:22.206398] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:39.930 [2024-10-01 12:40:22.206583] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:39.930 [2024-10-01 12:40:22.206674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:39.930 pt2 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:39.930 [2024-10-01 12:40:22.377287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:39.930 [2024-10-01 12:40:22.377470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.930 [2024-10-01 12:40:22.377529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:39.930 [2024-10-01 12:40:22.377652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.930 [2024-10-01 12:40:22.378023] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.930 [2024-10-01 12:40:22.378217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:39.930 [2024-10-01 12:40:22.378426] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:39.930 [2024-10-01 12:40:22.378509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:39.930 [2024-10-01 12:40:22.378636] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:20:39.930 [2024-10-01 12:40:22.378717] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.930 [2024-10-01 12:40:22.378831] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:39.930 
[2024-10-01 12:40:22.379150] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:20:39.930 [2024-10-01 12:40:22.379280] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:20:39.930 [2024-10-01 12:40:22.379510] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.930 pt3 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.930 12:40:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.931 12:40:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.931 12:40:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.190 12:40:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.190 "name": "raid_bdev1", 00:20:40.190 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:40.190 "strip_size_kb": 0, 00:20:40.190 "state": "online", 00:20:40.190 "raid_level": "raid1", 00:20:40.190 "superblock": true, 00:20:40.190 "num_base_bdevs": 3, 00:20:40.190 "num_base_bdevs_discovered": 3, 00:20:40.190 "num_base_bdevs_operational": 3, 00:20:40.190 "base_bdevs_list": [ 00:20:40.190 { 00:20:40.190 "name": "pt1", 00:20:40.190 "uuid": "39424da2-a7e4-5000-93b3-1fa77c7e42b5", 00:20:40.190 "is_configured": true, 00:20:40.190 "data_offset": 2048, 00:20:40.190 "data_size": 63488 00:20:40.190 }, 00:20:40.190 { 00:20:40.190 "name": "pt2", 00:20:40.190 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:40.190 "is_configured": true, 00:20:40.190 "data_offset": 2048, 00:20:40.190 "data_size": 63488 00:20:40.190 }, 00:20:40.190 { 00:20:40.190 "name": "pt3", 00:20:40.190 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:40.190 "is_configured": true, 00:20:40.190 "data_offset": 2048, 00:20:40.190 "data_size": 63488 00:20:40.190 } 00:20:40.190 ] 00:20:40.190 }' 00:20:40.190 12:40:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.190 12:40:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.758 12:40:23 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:40.758 12:40:23 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:40.759 [2024-10-01 12:40:23.204306] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.759 12:40:23 -- bdev/bdev_raid.sh@430 -- # '[' 978c8e6a-5204-4c0e-b0f2-b9f8df0551f7 '!=' 978c8e6a-5204-4c0e-b0f2-b9f8df0551f7 ']' 00:20:40.759 12:40:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:40.759 12:40:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:40.759 12:40:23 -- bdev/bdev_raid.sh@196 -- # return 0 
00:20:40.759 12:40:23 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:41.018 [2024-10-01 12:40:23.383922] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.018 12:40:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.277 12:40:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.277 "name": "raid_bdev1", 00:20:41.277 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:41.277 "strip_size_kb": 0, 00:20:41.277 "state": "online", 00:20:41.277 "raid_level": "raid1", 00:20:41.277 "superblock": true, 00:20:41.277 "num_base_bdevs": 3, 00:20:41.277 "num_base_bdevs_discovered": 2, 00:20:41.277 "num_base_bdevs_operational": 2, 00:20:41.277 "base_bdevs_list": [ 00:20:41.277 { 00:20:41.277 "name": null, 00:20:41.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.277 "is_configured": false, 00:20:41.277 "data_offset": 2048, 00:20:41.277 "data_size": 63488 00:20:41.277 }, 00:20:41.277 { 00:20:41.277 "name": "pt2", 00:20:41.277 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:41.277 "is_configured": true, 00:20:41.277 "data_offset": 2048, 00:20:41.277 "data_size": 63488 00:20:41.277 }, 00:20:41.277 { 00:20:41.277 "name": "pt3", 00:20:41.278 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:41.278 "is_configured": true, 00:20:41.278 "data_offset": 2048, 00:20:41.278 "data_size": 63488 00:20:41.278 } 00:20:41.278 ] 00:20:41.278 }' 00:20:41.278 12:40:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.278 12:40:23 -- common/autotest_common.sh@10 -- # set +x 00:20:41.537 12:40:24 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:41.796 [2024-10-01 12:40:24.226648] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.796 [2024-10-01 12:40:24.226786] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.796 [2024-10-01 12:40:24.226947] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.796 [2024-10-01 12:40:24.227028] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.796 [2024-10-01 12:40:24.227059] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:20:41.796 12:40:24 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:41.796 12:40:24 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.055 12:40:24 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:42.055 12:40:24 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:42.055 12:40:24 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:42.055 12:40:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:42.055 12:40:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:42.315 12:40:24 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:42.573 [2024-10-01 12:40:24.945587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:42.573 [2024-10-01 12:40:24.945763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.573 [2024-10-01 12:40:24.945828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:42.573 [2024-10-01 12:40:24.945958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.573 [2024-10-01 12:40:24.948377] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.573 [2024-10-01 12:40:24.948527] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:42.573 [2024-10-01 12:40:24.948706] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:42.573 [2024-10-01 12:40:24.948771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:42.573 pt2 00:20:42.573 12:40:24 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:42.573 12:40:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:42.573 12:40:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:42.573 12:40:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:42.573 12:40:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:42.574 12:40:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:42.574 12:40:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.574 12:40:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.574 12:40:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.574 12:40:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.574 12:40:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.574 12:40:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.832 12:40:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.832 "name": "raid_bdev1", 00:20:42.832 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:42.832 "strip_size_kb": 0, 00:20:42.832 "state": 
"configuring", 00:20:42.832 "raid_level": "raid1", 00:20:42.832 "superblock": true, 00:20:42.832 "num_base_bdevs": 3, 00:20:42.832 "num_base_bdevs_discovered": 1, 00:20:42.832 "num_base_bdevs_operational": 2, 00:20:42.832 "base_bdevs_list": [ 00:20:42.832 { 00:20:42.832 "name": null, 00:20:42.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.832 "is_configured": false, 00:20:42.832 "data_offset": 2048, 00:20:42.832 "data_size": 63488 00:20:42.832 }, 00:20:42.832 { 00:20:42.832 "name": "pt2", 00:20:42.832 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:42.832 "is_configured": true, 00:20:42.832 "data_offset": 2048, 00:20:42.832 "data_size": 63488 00:20:42.832 }, 00:20:42.832 { 00:20:42.832 "name": null, 00:20:42.832 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:42.832 "is_configured": false, 00:20:42.832 "data_offset": 2048, 00:20:42.832 "data_size": 63488 00:20:42.832 } 00:20:42.832 ] 00:20:42.832 }' 00:20:42.832 12:40:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.832 12:40:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@462 -- # i=2 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:43.400 [2024-10-01 12:40:25.816371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:43.400 [2024-10-01 12:40:25.816559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.400 [2024-10-01 12:40:25.816633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:43.400 [2024-10-01 12:40:25.816738] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.400 [2024-10-01 12:40:25.817169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.400 [2024-10-01 12:40:25.817298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:43.400 [2024-10-01 12:40:25.817474] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:43.400 [2024-10-01 12:40:25.817571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:43.400 [2024-10-01 12:40:25.817701] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:43.400 [2024-10-01 12:40:25.817788] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:43.400 [2024-10-01 12:40:25.817905] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:43.400 [2024-10-01 12:40:25.818242] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:43.400 [2024-10-01 12:40:25.818366] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:43.400 [2024-10-01 12:40:25.818564] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.400 pt3 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@119 
-- # local raid_level=raid1 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.400 12:40:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.659 12:40:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:43.659 "name": "raid_bdev1", 00:20:43.659 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:43.659 "strip_size_kb": 0, 00:20:43.659 "state": "online", 00:20:43.659 "raid_level": "raid1", 00:20:43.659 "superblock": true, 00:20:43.659 "num_base_bdevs": 3, 00:20:43.659 "num_base_bdevs_discovered": 2, 00:20:43.659 "num_base_bdevs_operational": 2, 00:20:43.659 "base_bdevs_list": [ 00:20:43.659 { 00:20:43.659 "name": null, 00:20:43.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.659 "is_configured": false, 00:20:43.659 "data_offset": 2048, 00:20:43.659 "data_size": 63488 00:20:43.659 }, 00:20:43.659 { 00:20:43.659 "name": "pt2", 00:20:43.659 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:43.659 "is_configured": true, 00:20:43.659 "data_offset": 2048, 00:20:43.659 "data_size": 63488 00:20:43.659 }, 00:20:43.659 { 00:20:43.659 "name": "pt3", 00:20:43.659 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:43.659 "is_configured": true, 00:20:43.659 "data_offset": 2048, 00:20:43.660 "data_size": 63488 00:20:43.660 } 00:20:43.660 ] 00:20:43.660 }' 00:20:43.660 12:40:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:43.660 12:40:26 -- common/autotest_common.sh@10 -- # set +x 00:20:44.228 12:40:26 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:20:44.228 12:40:26 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:44.228 [2024-10-01 12:40:26.683095] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:44.228 [2024-10-01 12:40:26.683219] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:44.228 [2024-10-01 12:40:26.683422] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.228 [2024-10-01 12:40:26.683509] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.228 [2024-10-01 12:40:26.683697] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:20:44.228 12:40:26 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.228 12:40:26 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:44.487 12:40:26 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:44.487 12:40:26 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:44.487 12:40:26 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:44.746 [2024-10-01 12:40:27.050562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:20:44.746 [2024-10-01 12:40:27.050758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.746 [2024-10-01 12:40:27.050829] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:44.746 [2024-10-01 12:40:27.050935] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.746 [2024-10-01 12:40:27.053196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.746 [2024-10-01 12:40:27.053358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:44.746 [2024-10-01 12:40:27.053569] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:44.746 [2024-10-01 12:40:27.053723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:44.746 pt1 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.746 "name": "raid_bdev1", 00:20:44.746 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:44.746 "strip_size_kb": 0, 00:20:44.746 "state": "configuring", 00:20:44.746 "raid_level": "raid1", 00:20:44.746 "superblock": true, 00:20:44.746 "num_base_bdevs": 3, 00:20:44.746 "num_base_bdevs_discovered": 1, 00:20:44.746 "num_base_bdevs_operational": 3, 00:20:44.746 "base_bdevs_list": [ 00:20:44.746 { 00:20:44.746 "name": "pt1", 00:20:44.746 "uuid": "39424da2-a7e4-5000-93b3-1fa77c7e42b5", 00:20:44.746 "is_configured": true, 00:20:44.746 "data_offset": 2048, 00:20:44.746 "data_size": 63488 00:20:44.746 }, 00:20:44.746 { 00:20:44.746 "name": null, 00:20:44.746 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:44.746 "is_configured": false, 00:20:44.746 "data_offset": 2048, 00:20:44.746 "data_size": 63488 00:20:44.746 }, 00:20:44.746 { 00:20:44.746 "name": null, 00:20:44.746 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:44.746 "is_configured": false, 00:20:44.746 "data_offset": 2048, 00:20:44.746 "data_size": 63488 00:20:44.746 } 00:20:44.746 ] 00:20:44.746 }' 00:20:44.746 12:40:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.746 12:40:27 -- common/autotest_common.sh@10 -- # set +x 00:20:45.313 12:40:27 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:45.313 12:40:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:45.313 12:40:27 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:45.572 12:40:28 -- 
bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:45.572 12:40:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:45.572 12:40:28 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@489 -- # i=2 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:45.830 [2024-10-01 12:40:28.332092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:45.830 [2024-10-01 12:40:28.332369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.830 [2024-10-01 12:40:28.332439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:45.830 [2024-10-01 12:40:28.332541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.830 [2024-10-01 12:40:28.333060] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.830 [2024-10-01 12:40:28.333196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:45.830 [2024-10-01 12:40:28.333399] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:45.830 [2024-10-01 12:40:28.333501] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:45.830 [2024-10-01 12:40:28.333569] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.830 [2024-10-01 12:40:28.333616] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:20:45.830 [2024-10-01 12:40:28.333706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:45.830 pt3 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:45.830 12:40:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.089 12:40:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.089 12:40:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.089 12:40:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.089 "name": "raid_bdev1", 00:20:46.089 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:46.089 "strip_size_kb": 0, 00:20:46.089 "state": "configuring", 00:20:46.089 "raid_level": "raid1", 00:20:46.089 "superblock": true, 00:20:46.089 "num_base_bdevs": 3, 00:20:46.089 "num_base_bdevs_discovered": 1, 00:20:46.089 
"num_base_bdevs_operational": 2, 00:20:46.089 "base_bdevs_list": [ 00:20:46.089 { 00:20:46.089 "name": null, 00:20:46.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.089 "is_configured": false, 00:20:46.089 "data_offset": 2048, 00:20:46.089 "data_size": 63488 00:20:46.089 }, 00:20:46.089 { 00:20:46.089 "name": null, 00:20:46.089 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:46.089 "is_configured": false, 00:20:46.089 "data_offset": 2048, 00:20:46.089 "data_size": 63488 00:20:46.089 }, 00:20:46.089 { 00:20:46.089 "name": "pt3", 00:20:46.089 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:46.089 "is_configured": true, 00:20:46.089 "data_offset": 2048, 00:20:46.089 "data_size": 63488 00:20:46.089 } 00:20:46.089 ] 00:20:46.089 }' 00:20:46.089 12:40:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.089 12:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:46.657 12:40:29 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:46.657 12:40:29 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:46.657 12:40:29 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:46.918 [2024-10-01 12:40:29.246728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:46.918 [2024-10-01 12:40:29.246949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.918 [2024-10-01 12:40:29.247027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:46.918 [2024-10-01 12:40:29.247124] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.918 [2024-10-01 12:40:29.247572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.918 [2024-10-01 12:40:29.247715] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:46.918 [2024-10-01 12:40:29.247896] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:46.918 [2024-10-01 12:40:29.248000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:46.918 [2024-10-01 12:40:29.248148] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:20:46.918 [2024-10-01 12:40:29.248232] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:46.918 [2024-10-01 12:40:29.248381] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:46.918 [2024-10-01 12:40:29.248766] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:20:46.919 [2024-10-01 12:40:29.248869] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:20:46.919 [2024-10-01 12:40:29.249061] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.919 pt2 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.919 "name": "raid_bdev1", 00:20:46.919 "uuid": "978c8e6a-5204-4c0e-b0f2-b9f8df0551f7", 00:20:46.919 "strip_size_kb": 0, 00:20:46.919 "state": "online", 00:20:46.919 "raid_level": "raid1", 00:20:46.919 "superblock": true, 00:20:46.919 "num_base_bdevs": 3, 00:20:46.919 "num_base_bdevs_discovered": 2, 00:20:46.919 "num_base_bdevs_operational": 2, 00:20:46.919 "base_bdevs_list": [ 00:20:46.919 { 00:20:46.919 "name": null, 00:20:46.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.919 "is_configured": false, 00:20:46.919 "data_offset": 2048, 00:20:46.919 "data_size": 63488 00:20:46.919 }, 00:20:46.919 { 00:20:46.919 "name": "pt2", 00:20:46.919 "uuid": "d1b06e8a-59f3-5b6e-8b8a-c99d99250935", 00:20:46.919 "is_configured": true, 00:20:46.919 "data_offset": 2048, 00:20:46.919 "data_size": 63488 00:20:46.919 }, 00:20:46.919 { 00:20:46.919 "name": "pt3", 00:20:46.919 "uuid": "c6d02415-22cb-5b0e-b4aa-9ad26cc1d20f", 00:20:46.919 "is_configured": true, 00:20:46.919 "data_offset": 2048, 00:20:46.919 "data_size": 63488 00:20:46.919 } 00:20:46.919 ] 00:20:46.919 }' 00:20:46.919 12:40:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.919 12:40:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.486 12:40:29 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:47.486 12:40:29 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:47.746 [2024-10-01 12:40:30.117663] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.746 12:40:30 -- bdev/bdev_raid.sh@506 -- # '[' 978c8e6a-5204-4c0e-b0f2-b9f8df0551f7 '!=' 978c8e6a-5204-4c0e-b0f2-b9f8df0551f7 ']' 00:20:47.746 12:40:30 -- bdev/bdev_raid.sh@511 -- # killprocess 118387 00:20:47.746 12:40:30 -- common/autotest_common.sh@926 -- # '[' -z 118387 ']' 00:20:47.746 12:40:30 -- common/autotest_common.sh@930 -- # kill -0 118387 00:20:47.746 12:40:30 -- common/autotest_common.sh@931 -- # uname 00:20:47.746 12:40:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:47.746 12:40:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118387 00:20:47.746 killing process with pid 118387 00:20:47.746 12:40:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:47.746 12:40:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:47.746 12:40:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118387' 00:20:47.746 12:40:30 -- common/autotest_common.sh@945 -- # kill 118387 00:20:47.746 [2024-10-01 12:40:30.183640] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:47.746 12:40:30 -- common/autotest_common.sh@950 -- # wait 118387 00:20:47.746 [2024-10-01 12:40:30.183711] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
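The records above capture the final assertion of raid_superblock_test: the script fetches the surviving bdev with the bdev_get_bdevs RPC, extracts its uuid with jq, compares it against the UUID recorded when raid_bdev1 was created, and only then kills the target and deletes the array. A minimal stand-alone sketch of that same check follows — it is an illustration, not part of the captured log, and it assumes a running SPDK target listening on /var/tmp/spdk-raid.sock with an existing raid bdev named raid_bdev1; the expected_uuid value is taken from the log purely as an example.

    #!/usr/bin/env bash
    # Sketch of the UUID verification seen in the log above. Assumes a live
    # SPDK target at /var/tmp/spdk-raid.sock and a raid bdev named raid_bdev1.
    set -euo pipefail

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    expected_uuid="978c8e6a-5204-4c0e-b0f2-b9f8df0551f7"   # value from this run, for illustration

    # bdev_get_bdevs -b <name> returns a JSON array; pull the uuid out with jq,
    # the same way the test does.
    actual_uuid=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')

    if [ "$actual_uuid" != "$expected_uuid" ]; then
        echo "uuid mismatch: $actual_uuid != $expected_uuid" >&2
        exit 1
    fi

    # Tear down, mirroring the test's cleanup path.
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
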
00:20:47.746 [2024-10-01 12:40:30.183764] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.746 [2024-10-01 12:40:30.183773] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:20:48.005 [2024-10-01 12:40:30.415916] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:48.970 ************************************ 00:20:48.970 END TEST raid_superblock_test 00:20:48.970 ************************************ 00:20:48.970 12:40:31 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:48.970 00:20:48.970 real 0m15.728s 00:20:48.970 user 0m27.571s 00:20:48.970 sys 0m2.661s 00:20:48.970 12:40:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.970 12:40:31 -- common/autotest_common.sh@10 -- # set +x 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:20:49.230 12:40:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:20:49.230 12:40:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:49.230 12:40:31 -- common/autotest_common.sh@10 -- # set +x 00:20:49.230 ************************************ 00:20:49.230 START TEST raid_state_function_test 00:20:49.230 ************************************ 00:20:49.230 12:40:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=118953 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118953' 00:20:49.230 Process raid pid: 118953 00:20:49.230 12:40:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118953 /var/tmp/spdk-raid.sock 00:20:49.230 12:40:31 -- common/autotest_common.sh@819 -- # '[' -z 118953 ']' 00:20:49.230 12:40:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:49.230 12:40:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:49.230 12:40:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:49.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:49.230 12:40:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:49.230 12:40:31 -- common/autotest_common.sh@10 -- # set +x 00:20:49.230 [2024-10-01 12:40:31.651054] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:49.230 [2024-10-01 12:40:31.651440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.491 [2024-10-01 12:40:31.822090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.491 [2024-10-01 12:40:32.015659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.750 [2024-10-01 12:40:32.200501] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.010 12:40:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:50.010 12:40:32 -- common/autotest_common.sh@852 -- # return 0 00:20:50.010 12:40:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:50.270 [2024-10-01 12:40:32.613611] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:50.270 [2024-10-01 12:40:32.613905] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:50.270 [2024-10-01 12:40:32.613981] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:50.270 [2024-10-01 12:40:32.614035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:50.270 [2024-10-01 12:40:32.614061] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:50.270 [2024-10-01 12:40:32.614121] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:50.270 [2024-10-01 12:40:32.614396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:50.270 [2024-10-01 12:40:32.614451] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:50.270 12:40:32 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.270 "name": "Existed_Raid", 00:20:50.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.270 "strip_size_kb": 64, 00:20:50.270 "state": "configuring", 00:20:50.270 "raid_level": "raid0", 00:20:50.270 "superblock": false, 00:20:50.270 "num_base_bdevs": 4, 00:20:50.270 "num_base_bdevs_discovered": 0, 00:20:50.270 "num_base_bdevs_operational": 4, 00:20:50.270 "base_bdevs_list": [ 00:20:50.270 { 00:20:50.270 "name": "BaseBdev1", 00:20:50.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.270 "is_configured": false, 00:20:50.270 "data_offset": 0, 00:20:50.270 "data_size": 0 00:20:50.270 }, 00:20:50.270 { 00:20:50.270 "name": "BaseBdev2", 00:20:50.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.270 "is_configured": false, 00:20:50.270 "data_offset": 0, 00:20:50.270 "data_size": 0 00:20:50.270 }, 00:20:50.270 { 00:20:50.270 "name": "BaseBdev3", 00:20:50.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.270 "is_configured": false, 00:20:50.270 "data_offset": 0, 00:20:50.270 "data_size": 0 00:20:50.270 }, 00:20:50.270 { 00:20:50.270 "name": "BaseBdev4", 00:20:50.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.270 "is_configured": false, 00:20:50.270 "data_offset": 0, 00:20:50.270 "data_size": 0 00:20:50.270 } 00:20:50.270 ] 00:20:50.270 }' 00:20:50.270 12:40:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.270 12:40:32 -- common/autotest_common.sh@10 -- # set +x 00:20:50.838 12:40:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:51.098 [2024-10-01 12:40:33.508252] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.098 [2024-10-01 12:40:33.508439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:20:51.098 12:40:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:51.358 [2024-10-01 12:40:33.668052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:51.358 [2024-10-01 12:40:33.668237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:51.358 [2024-10-01 12:40:33.668334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:51.358 [2024-10-01 12:40:33.668390] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:51.358 [2024-10-01 12:40:33.668428] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:51.358 [2024-10-01 12:40:33.668542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:51.358 [2024-10-01 12:40:33.668576] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:51.358 [2024-10-01 12:40:33.668618] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:51.358 12:40:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:51.358 [2024-10-01 12:40:33.876437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:51.358 BaseBdev1 00:20:51.619 12:40:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:51.619 12:40:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:51.619 12:40:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:51.619 12:40:33 -- common/autotest_common.sh@889 -- # local i 00:20:51.619 12:40:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:51.619 12:40:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:51.619 12:40:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:51.619 12:40:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:51.878 [ 00:20:51.878 { 00:20:51.878 "name": "BaseBdev1", 00:20:51.878 "aliases": [ 00:20:51.878 "5d45b20b-fbf6-4ef1-819a-67edff1c557b" 00:20:51.878 ], 00:20:51.878 "product_name": "Malloc disk", 00:20:51.878 "block_size": 512, 00:20:51.878 "num_blocks": 65536, 00:20:51.878 "uuid": "5d45b20b-fbf6-4ef1-819a-67edff1c557b", 00:20:51.878 "assigned_rate_limits": { 00:20:51.878 "rw_ios_per_sec": 0, 00:20:51.878 "rw_mbytes_per_sec": 0, 00:20:51.878 "r_mbytes_per_sec": 0, 00:20:51.878 "w_mbytes_per_sec": 0 00:20:51.878 }, 00:20:51.878 "claimed": true, 00:20:51.878 "claim_type": "exclusive_write", 00:20:51.878 "zoned": false, 00:20:51.878 "supported_io_types": { 00:20:51.878 "read": true, 00:20:51.878 "write": true, 00:20:51.878 "unmap": true, 00:20:51.878 "write_zeroes": true, 00:20:51.878 "flush": true, 00:20:51.878 "reset": true, 00:20:51.878 "compare": false, 00:20:51.878 "compare_and_write": false, 00:20:51.878 "abort": true, 00:20:51.878 "nvme_admin": false, 00:20:51.878 "nvme_io": false 00:20:51.878 }, 00:20:51.878 "memory_domains": [ 00:20:51.878 { 00:20:51.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.878 "dma_device_type": 2 00:20:51.878 } 00:20:51.878 ], 00:20:51.878 "driver_specific": {} 00:20:51.878 } 00:20:51.878 ] 00:20:51.878 12:40:34 -- common/autotest_common.sh@895 -- # return 0 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:51.878 12:40:34 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.878 12:40:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.139 12:40:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.139 "name": "Existed_Raid", 00:20:52.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.139 "strip_size_kb": 64, 00:20:52.139 "state": "configuring", 00:20:52.139 "raid_level": "raid0", 00:20:52.139 "superblock": false, 00:20:52.139 "num_base_bdevs": 4, 00:20:52.139 "num_base_bdevs_discovered": 1, 00:20:52.139 "num_base_bdevs_operational": 4, 00:20:52.139 "base_bdevs_list": [ 00:20:52.139 { 00:20:52.139 "name": "BaseBdev1", 00:20:52.139 "uuid": "5d45b20b-fbf6-4ef1-819a-67edff1c557b", 00:20:52.139 "is_configured": true, 00:20:52.139 "data_offset": 0, 00:20:52.139 "data_size": 65536 00:20:52.139 }, 00:20:52.139 { 00:20:52.139 "name": "BaseBdev2", 00:20:52.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.139 "is_configured": false, 00:20:52.139 "data_offset": 0, 00:20:52.139 "data_size": 0 00:20:52.139 }, 00:20:52.139 { 00:20:52.139 "name": "BaseBdev3", 00:20:52.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.139 "is_configured": false, 00:20:52.139 "data_offset": 0, 00:20:52.139 "data_size": 0 00:20:52.139 }, 00:20:52.139 { 00:20:52.139 "name": "BaseBdev4", 00:20:52.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.139 "is_configured": false, 00:20:52.139 "data_offset": 0, 00:20:52.139 "data_size": 0 00:20:52.139 } 00:20:52.139 ] 00:20:52.139 }' 00:20:52.139 12:40:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.139 12:40:34 -- common/autotest_common.sh@10 -- # set +x 00:20:52.708 12:40:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:52.708 [2024-10-01 12:40:35.130610] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:52.708 [2024-10-01 12:40:35.130759] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:52.708 12:40:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:52.708 12:40:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:52.969 [2024-10-01 12:40:35.318400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.969 [2024-10-01 12:40:35.320661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.969 [2024-10-01 12:40:35.320854] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.969 [2024-10-01 12:40:35.320939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:52.969 [2024-10-01 12:40:35.321000] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.969 [2024-10-01 12:40:35.321028] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:52.969 [2024-10-01 
12:40:35.321071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.969 12:40:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.228 12:40:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:53.229 "name": "Existed_Raid", 00:20:53.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.229 "strip_size_kb": 64, 00:20:53.229 "state": "configuring", 00:20:53.229 "raid_level": "raid0", 00:20:53.229 "superblock": false, 00:20:53.229 "num_base_bdevs": 4, 00:20:53.229 "num_base_bdevs_discovered": 1, 00:20:53.229 "num_base_bdevs_operational": 4, 00:20:53.229 "base_bdevs_list": [ 00:20:53.229 { 00:20:53.229 "name": "BaseBdev1", 00:20:53.229 "uuid": "5d45b20b-fbf6-4ef1-819a-67edff1c557b", 00:20:53.229 "is_configured": true, 00:20:53.229 "data_offset": 0, 00:20:53.229 "data_size": 65536 00:20:53.229 }, 00:20:53.229 { 00:20:53.229 "name": "BaseBdev2", 00:20:53.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.229 "is_configured": false, 00:20:53.229 "data_offset": 0, 00:20:53.229 "data_size": 0 00:20:53.229 }, 00:20:53.229 { 00:20:53.229 "name": "BaseBdev3", 00:20:53.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.229 "is_configured": false, 00:20:53.229 "data_offset": 0, 00:20:53.229 "data_size": 0 00:20:53.229 }, 00:20:53.229 { 00:20:53.229 "name": "BaseBdev4", 00:20:53.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.229 "is_configured": false, 00:20:53.229 "data_offset": 0, 00:20:53.229 "data_size": 0 00:20:53.229 } 00:20:53.229 ] 00:20:53.229 }' 00:20:53.229 12:40:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:53.229 12:40:35 -- common/autotest_common.sh@10 -- # set +x 00:20:53.799 12:40:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:53.799 [2024-10-01 12:40:36.264035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.799 BaseBdev2 00:20:53.799 12:40:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:53.799 12:40:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:20:53.799 12:40:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:53.799 12:40:36 -- common/autotest_common.sh@889 -- # local i 00:20:53.799 12:40:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:53.799 12:40:36 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:53.799 12:40:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.058 12:40:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:54.318 [ 00:20:54.318 { 00:20:54.318 "name": "BaseBdev2", 00:20:54.318 "aliases": [ 00:20:54.318 "20079bde-3f0d-4cd6-a5d9-f4c3ccf4e68d" 00:20:54.318 ], 00:20:54.318 "product_name": "Malloc disk", 00:20:54.318 "block_size": 512, 00:20:54.318 "num_blocks": 65536, 00:20:54.318 "uuid": "20079bde-3f0d-4cd6-a5d9-f4c3ccf4e68d", 00:20:54.318 "assigned_rate_limits": { 00:20:54.318 "rw_ios_per_sec": 0, 00:20:54.318 "rw_mbytes_per_sec": 0, 00:20:54.318 "r_mbytes_per_sec": 0, 00:20:54.318 "w_mbytes_per_sec": 0 00:20:54.318 }, 00:20:54.318 "claimed": true, 00:20:54.318 "claim_type": "exclusive_write", 00:20:54.318 "zoned": false, 00:20:54.318 "supported_io_types": { 00:20:54.318 "read": true, 00:20:54.318 "write": true, 00:20:54.318 "unmap": true, 00:20:54.318 "write_zeroes": true, 00:20:54.318 "flush": true, 00:20:54.318 "reset": true, 00:20:54.318 "compare": false, 00:20:54.318 "compare_and_write": false, 00:20:54.318 "abort": true, 00:20:54.318 "nvme_admin": false, 00:20:54.318 "nvme_io": false 00:20:54.318 }, 00:20:54.318 "memory_domains": [ 00:20:54.318 { 00:20:54.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.318 "dma_device_type": 2 00:20:54.318 } 00:20:54.318 ], 00:20:54.318 "driver_specific": {} 00:20:54.318 } 00:20:54.318 ] 00:20:54.318 12:40:36 -- common/autotest_common.sh@895 -- # return 0 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.318 12:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.318 "name": "Existed_Raid", 00:20:54.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.318 "strip_size_kb": 64, 00:20:54.318 "state": "configuring", 00:20:54.318 "raid_level": "raid0", 00:20:54.318 "superblock": false, 00:20:54.318 "num_base_bdevs": 4, 00:20:54.318 "num_base_bdevs_discovered": 2, 00:20:54.318 "num_base_bdevs_operational": 4, 00:20:54.318 "base_bdevs_list": [ 00:20:54.318 { 00:20:54.318 "name": "BaseBdev1", 00:20:54.318 "uuid": "5d45b20b-fbf6-4ef1-819a-67edff1c557b", 00:20:54.318 "is_configured": true, 00:20:54.318 "data_offset": 0, 00:20:54.318 
"data_size": 65536 00:20:54.318 }, 00:20:54.318 { 00:20:54.318 "name": "BaseBdev2", 00:20:54.318 "uuid": "20079bde-3f0d-4cd6-a5d9-f4c3ccf4e68d", 00:20:54.318 "is_configured": true, 00:20:54.318 "data_offset": 0, 00:20:54.318 "data_size": 65536 00:20:54.318 }, 00:20:54.318 { 00:20:54.318 "name": "BaseBdev3", 00:20:54.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.319 "is_configured": false, 00:20:54.319 "data_offset": 0, 00:20:54.319 "data_size": 0 00:20:54.319 }, 00:20:54.319 { 00:20:54.319 "name": "BaseBdev4", 00:20:54.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.319 "is_configured": false, 00:20:54.319 "data_offset": 0, 00:20:54.319 "data_size": 0 00:20:54.319 } 00:20:54.319 ] 00:20:54.319 }' 00:20:54.319 12:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.319 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:20:54.888 12:40:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:55.147 [2024-10-01 12:40:37.469193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:55.147 BaseBdev3 00:20:55.147 12:40:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:55.147 12:40:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:20:55.148 12:40:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:55.148 12:40:37 -- common/autotest_common.sh@889 -- # local i 00:20:55.148 12:40:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:55.148 12:40:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:55.148 12:40:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:55.148 12:40:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:55.407 [ 00:20:55.407 { 00:20:55.407 "name": "BaseBdev3", 00:20:55.407 "aliases": [ 00:20:55.407 "35ae1330-d96f-4510-bf9f-0865406b45cd" 00:20:55.407 ], 00:20:55.407 "product_name": "Malloc disk", 00:20:55.407 "block_size": 512, 00:20:55.407 "num_blocks": 65536, 00:20:55.407 "uuid": "35ae1330-d96f-4510-bf9f-0865406b45cd", 00:20:55.407 "assigned_rate_limits": { 00:20:55.407 "rw_ios_per_sec": 0, 00:20:55.407 "rw_mbytes_per_sec": 0, 00:20:55.407 "r_mbytes_per_sec": 0, 00:20:55.407 "w_mbytes_per_sec": 0 00:20:55.407 }, 00:20:55.407 "claimed": true, 00:20:55.407 "claim_type": "exclusive_write", 00:20:55.407 "zoned": false, 00:20:55.407 "supported_io_types": { 00:20:55.407 "read": true, 00:20:55.407 "write": true, 00:20:55.407 "unmap": true, 00:20:55.407 "write_zeroes": true, 00:20:55.407 "flush": true, 00:20:55.407 "reset": true, 00:20:55.407 "compare": false, 00:20:55.407 "compare_and_write": false, 00:20:55.407 "abort": true, 00:20:55.407 "nvme_admin": false, 00:20:55.407 "nvme_io": false 00:20:55.407 }, 00:20:55.407 "memory_domains": [ 00:20:55.407 { 00:20:55.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.407 "dma_device_type": 2 00:20:55.407 } 00:20:55.407 ], 00:20:55.407 "driver_specific": {} 00:20:55.407 } 00:20:55.407 ] 00:20:55.407 12:40:37 -- common/autotest_common.sh@895 -- # return 0 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:55.407 12:40:37 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.407 12:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.667 12:40:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.667 "name": "Existed_Raid", 00:20:55.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.667 "strip_size_kb": 64, 00:20:55.667 "state": "configuring", 00:20:55.667 "raid_level": "raid0", 00:20:55.667 "superblock": false, 00:20:55.667 "num_base_bdevs": 4, 00:20:55.667 "num_base_bdevs_discovered": 3, 00:20:55.667 "num_base_bdevs_operational": 4, 00:20:55.667 "base_bdevs_list": [ 00:20:55.667 { 00:20:55.667 "name": "BaseBdev1", 00:20:55.667 "uuid": "5d45b20b-fbf6-4ef1-819a-67edff1c557b", 00:20:55.667 "is_configured": true, 00:20:55.667 "data_offset": 0, 00:20:55.667 "data_size": 65536 00:20:55.667 }, 00:20:55.667 { 00:20:55.667 "name": "BaseBdev2", 00:20:55.667 "uuid": "20079bde-3f0d-4cd6-a5d9-f4c3ccf4e68d", 00:20:55.667 "is_configured": true, 00:20:55.667 "data_offset": 0, 00:20:55.667 "data_size": 65536 00:20:55.667 }, 00:20:55.667 { 00:20:55.667 "name": "BaseBdev3", 00:20:55.667 "uuid": "35ae1330-d96f-4510-bf9f-0865406b45cd", 00:20:55.667 "is_configured": true, 00:20:55.667 "data_offset": 0, 00:20:55.667 "data_size": 65536 00:20:55.667 }, 00:20:55.667 { 00:20:55.667 "name": "BaseBdev4", 00:20:55.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.667 "is_configured": false, 00:20:55.667 "data_offset": 0, 00:20:55.668 "data_size": 0 00:20:55.668 } 00:20:55.668 ] 00:20:55.668 }' 00:20:55.668 12:40:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.668 12:40:38 -- common/autotest_common.sh@10 -- # set +x 00:20:56.236 12:40:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:56.236 [2024-10-01 12:40:38.742482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:56.236 [2024-10-01 12:40:38.742527] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:20:56.236 [2024-10-01 12:40:38.742535] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:56.236 [2024-10-01 12:40:38.742666] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:56.236 [2024-10-01 12:40:38.743012] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:20:56.236 [2024-10-01 12:40:38.743023] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:20:56.236 [2024-10-01 12:40:38.743263] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.236 BaseBdev4 00:20:56.236 
12:40:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:56.236 12:40:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:20:56.236 12:40:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:56.236 12:40:38 -- common/autotest_common.sh@889 -- # local i 00:20:56.236 12:40:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:56.236 12:40:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:56.236 12:40:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:56.494 12:40:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:56.753 [ 00:20:56.753 { 00:20:56.753 "name": "BaseBdev4", 00:20:56.753 "aliases": [ 00:20:56.753 "98710991-5151-4a0e-8847-b71d92f690d7" 00:20:56.753 ], 00:20:56.753 "product_name": "Malloc disk", 00:20:56.753 "block_size": 512, 00:20:56.753 "num_blocks": 65536, 00:20:56.753 "uuid": "98710991-5151-4a0e-8847-b71d92f690d7", 00:20:56.753 "assigned_rate_limits": { 00:20:56.753 "rw_ios_per_sec": 0, 00:20:56.753 "rw_mbytes_per_sec": 0, 00:20:56.753 "r_mbytes_per_sec": 0, 00:20:56.753 "w_mbytes_per_sec": 0 00:20:56.753 }, 00:20:56.753 "claimed": true, 00:20:56.753 "claim_type": "exclusive_write", 00:20:56.753 "zoned": false, 00:20:56.753 "supported_io_types": { 00:20:56.753 "read": true, 00:20:56.753 "write": true, 00:20:56.753 "unmap": true, 00:20:56.753 "write_zeroes": true, 00:20:56.753 "flush": true, 00:20:56.753 "reset": true, 00:20:56.753 "compare": false, 00:20:56.753 "compare_and_write": false, 00:20:56.753 "abort": true, 00:20:56.753 "nvme_admin": false, 00:20:56.753 "nvme_io": false 00:20:56.753 }, 00:20:56.753 "memory_domains": [ 00:20:56.753 { 00:20:56.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.753 "dma_device_type": 2 00:20:56.753 } 00:20:56.753 ], 00:20:56.753 "driver_specific": {} 00:20:56.753 } 00:20:56.753 ] 00:20:56.753 12:40:39 -- common/autotest_common.sh@895 -- # return 0 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.753 12:40:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.753 "name": "Existed_Raid", 00:20:56.753 "uuid": "f4946ab7-2c40-4d91-b407-7f2b938ec892", 00:20:56.753 "strip_size_kb": 64, 00:20:56.753 "state": "online", 00:20:56.753 "raid_level": "raid0", 00:20:56.753 
"superblock": false, 00:20:56.753 "num_base_bdevs": 4, 00:20:56.753 "num_base_bdevs_discovered": 4, 00:20:56.753 "num_base_bdevs_operational": 4, 00:20:56.753 "base_bdevs_list": [ 00:20:56.753 { 00:20:56.753 "name": "BaseBdev1", 00:20:56.753 "uuid": "5d45b20b-fbf6-4ef1-819a-67edff1c557b", 00:20:56.753 "is_configured": true, 00:20:56.753 "data_offset": 0, 00:20:56.753 "data_size": 65536 00:20:56.753 }, 00:20:56.753 { 00:20:56.753 "name": "BaseBdev2", 00:20:56.753 "uuid": "20079bde-3f0d-4cd6-a5d9-f4c3ccf4e68d", 00:20:56.753 "is_configured": true, 00:20:56.754 "data_offset": 0, 00:20:56.754 "data_size": 65536 00:20:56.754 }, 00:20:56.754 { 00:20:56.754 "name": "BaseBdev3", 00:20:56.754 "uuid": "35ae1330-d96f-4510-bf9f-0865406b45cd", 00:20:56.754 "is_configured": true, 00:20:56.754 "data_offset": 0, 00:20:56.754 "data_size": 65536 00:20:56.754 }, 00:20:56.754 { 00:20:56.754 "name": "BaseBdev4", 00:20:56.754 "uuid": "98710991-5151-4a0e-8847-b71d92f690d7", 00:20:56.754 "is_configured": true, 00:20:56.754 "data_offset": 0, 00:20:56.754 "data_size": 65536 00:20:56.754 } 00:20:56.754 ] 00:20:56.754 }' 00:20:56.754 12:40:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.754 12:40:39 -- common/autotest_common.sh@10 -- # set +x 00:20:57.322 12:40:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:57.581 [2024-10-01 12:40:39.924842] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:57.581 [2024-10-01 12:40:39.924872] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.581 [2024-10-01 12:40:39.924930] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.581 12:40:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.840 12:40:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.840 "name": "Existed_Raid", 00:20:57.840 "uuid": "f4946ab7-2c40-4d91-b407-7f2b938ec892", 00:20:57.840 "strip_size_kb": 64, 00:20:57.840 "state": "offline", 00:20:57.840 "raid_level": "raid0", 00:20:57.840 "superblock": false, 00:20:57.840 "num_base_bdevs": 4, 00:20:57.840 "num_base_bdevs_discovered": 3, 00:20:57.840 
"num_base_bdevs_operational": 3, 00:20:57.840 "base_bdevs_list": [ 00:20:57.840 { 00:20:57.840 "name": null, 00:20:57.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.840 "is_configured": false, 00:20:57.840 "data_offset": 0, 00:20:57.840 "data_size": 65536 00:20:57.840 }, 00:20:57.840 { 00:20:57.840 "name": "BaseBdev2", 00:20:57.840 "uuid": "20079bde-3f0d-4cd6-a5d9-f4c3ccf4e68d", 00:20:57.840 "is_configured": true, 00:20:57.840 "data_offset": 0, 00:20:57.840 "data_size": 65536 00:20:57.840 }, 00:20:57.840 { 00:20:57.840 "name": "BaseBdev3", 00:20:57.840 "uuid": "35ae1330-d96f-4510-bf9f-0865406b45cd", 00:20:57.840 "is_configured": true, 00:20:57.840 "data_offset": 0, 00:20:57.840 "data_size": 65536 00:20:57.840 }, 00:20:57.840 { 00:20:57.840 "name": "BaseBdev4", 00:20:57.840 "uuid": "98710991-5151-4a0e-8847-b71d92f690d7", 00:20:57.840 "is_configured": true, 00:20:57.840 "data_offset": 0, 00:20:57.840 "data_size": 65536 00:20:57.840 } 00:20:57.840 ] 00:20:57.840 }' 00:20:57.840 12:40:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.840 12:40:40 -- common/autotest_common.sh@10 -- # set +x 00:20:58.409 12:40:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:58.409 12:40:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:58.409 12:40:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:58.409 12:40:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.668 12:40:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:58.668 12:40:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:58.668 12:40:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:58.668 [2024-10-01 12:40:41.124443] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:58.927 12:40:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:58.927 12:40:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:58.927 12:40:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.927 12:40:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:58.927 12:40:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:58.927 12:40:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:58.927 12:40:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:59.186 [2024-10-01 12:40:41.548976] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:59.186 12:40:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:59.186 12:40:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:59.186 12:40:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:59.186 12:40:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.445 12:40:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:59.445 12:40:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:59.445 12:40:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:59.705 [2024-10-01 12:40:42.001606] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:59.705 [2024-10-01 12:40:42.001663] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:20:59.705 12:40:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:59.705 12:40:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:59.705 12:40:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.705 12:40:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:59.964 12:40:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:59.964 12:40:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:59.964 12:40:42 -- bdev/bdev_raid.sh@287 -- # killprocess 118953 00:20:59.964 12:40:42 -- common/autotest_common.sh@926 -- # '[' -z 118953 ']' 00:20:59.964 12:40:42 -- common/autotest_common.sh@930 -- # kill -0 118953 00:20:59.964 12:40:42 -- common/autotest_common.sh@931 -- # uname 00:20:59.964 12:40:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.964 12:40:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118953 00:20:59.964 12:40:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:59.964 12:40:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:59.964 12:40:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118953' 00:20:59.964 killing process with pid 118953 00:20:59.964 12:40:42 -- common/autotest_common.sh@945 -- # kill 118953 00:20:59.964 [2024-10-01 12:40:42.313681] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.964 [2024-10-01 12:40:42.313812] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:59.964 12:40:42 -- common/autotest_common.sh@950 -- # wait 118953 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:01.344 00:21:01.344 real 0m11.935s 00:21:01.344 user 0m20.252s 00:21:01.344 sys 0m1.902s 00:21:01.344 ************************************ 00:21:01.344 END TEST raid_state_function_test 00:21:01.344 ************************************ 00:21:01.344 12:40:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.344 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:21:01.344 12:40:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:01.344 12:40:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:01.344 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:01.344 ************************************ 00:21:01.344 START TEST raid_state_function_test_sb 00:21:01.344 ************************************ 00:21:01.344 12:40:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:01.344 
12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=119372 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119372' 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:01.344 Process raid pid: 119372 00:21:01.344 12:40:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119372 /var/tmp/spdk-raid.sock 00:21:01.344 12:40:43 -- common/autotest_common.sh@819 -- # '[' -z 119372 ']' 00:21:01.344 12:40:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.344 12:40:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.344 12:40:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:01.344 12:40:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.344 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:01.344 [2024-10-01 12:40:43.679023] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
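At this point the harness has launched a bare bdev_svc app and is waiting on /var/tmp/spdk-raid.sock. A minimal sketch of reproducing the superblock-enabled RAID0 setup this test builds, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk and an already-running app (the $RPC alias is introduced here only for brevity):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Four 32 MiB malloc bdevs with 512-byte blocks -> 65536 blocks each, matching the traces below
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
done
# -r raid0 with -z 64 (64 KiB strip size) and -s (write a superblock to every base bdev)
$RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid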
00:21:01.344 [2024-10-01 12:40:43.679185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.344 [2024-10-01 12:40:43.851307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.605 [2024-10-01 12:40:44.037810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.865 [2024-10-01 12:40:44.221558] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.124 12:40:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.124 12:40:44 -- common/autotest_common.sh@852 -- # return 0 00:21:02.124 12:40:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:02.124 [2024-10-01 12:40:44.624511] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:02.125 [2024-10-01 12:40:44.624611] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:02.125 [2024-10-01 12:40:44.624622] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.125 [2024-10-01 12:40:44.624645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.125 [2024-10-01 12:40:44.624652] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:02.125 [2024-10-01 12:40:44.624690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:02.125 [2024-10-01 12:40:44.624697] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:02.125 [2024-10-01 12:40:44.624725] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.125 12:40:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.384 12:40:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:02.384 "name": "Existed_Raid", 00:21:02.384 "uuid": "7443f18d-c3a9-4490-bea6-1e97d2e4df02", 00:21:02.384 "strip_size_kb": 64, 00:21:02.384 "state": "configuring", 00:21:02.384 "raid_level": "raid0", 00:21:02.384 "superblock": true, 00:21:02.384 "num_base_bdevs": 4, 00:21:02.384 "num_base_bdevs_discovered": 0, 00:21:02.384 "num_base_bdevs_operational": 4, 00:21:02.384 "base_bdevs_list": [ 00:21:02.384 { 00:21:02.384 
"name": "BaseBdev1", 00:21:02.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.384 "is_configured": false, 00:21:02.384 "data_offset": 0, 00:21:02.384 "data_size": 0 00:21:02.384 }, 00:21:02.384 { 00:21:02.384 "name": "BaseBdev2", 00:21:02.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.384 "is_configured": false, 00:21:02.384 "data_offset": 0, 00:21:02.384 "data_size": 0 00:21:02.384 }, 00:21:02.384 { 00:21:02.384 "name": "BaseBdev3", 00:21:02.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.384 "is_configured": false, 00:21:02.384 "data_offset": 0, 00:21:02.384 "data_size": 0 00:21:02.384 }, 00:21:02.384 { 00:21:02.384 "name": "BaseBdev4", 00:21:02.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.384 "is_configured": false, 00:21:02.384 "data_offset": 0, 00:21:02.384 "data_size": 0 00:21:02.384 } 00:21:02.384 ] 00:21:02.384 }' 00:21:02.384 12:40:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:02.384 12:40:44 -- common/autotest_common.sh@10 -- # set +x 00:21:02.951 12:40:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:02.951 [2024-10-01 12:40:45.479125] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:02.951 [2024-10-01 12:40:45.479166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:21:03.211 12:40:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:03.211 [2024-10-01 12:40:45.670881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:03.211 [2024-10-01 12:40:45.670941] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:03.211 [2024-10-01 12:40:45.670949] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:03.211 [2024-10-01 12:40:45.670973] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:03.211 [2024-10-01 12:40:45.670980] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:03.211 [2024-10-01 12:40:45.671014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:03.211 [2024-10-01 12:40:45.671020] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:03.211 [2024-10-01 12:40:45.671043] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:03.211 12:40:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:03.471 [2024-10-01 12:40:45.883813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.471 BaseBdev1 00:21:03.471 12:40:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:03.471 12:40:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:03.471 12:40:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:03.471 12:40:45 -- common/autotest_common.sh@889 -- # local i 00:21:03.471 12:40:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:03.471 12:40:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:03.471 12:40:45 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:03.731 12:40:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:03.731 [ 00:21:03.731 { 00:21:03.731 "name": "BaseBdev1", 00:21:03.731 "aliases": [ 00:21:03.731 "38cef8ed-2837-49ec-a171-d40b27a67aa2" 00:21:03.731 ], 00:21:03.731 "product_name": "Malloc disk", 00:21:03.731 "block_size": 512, 00:21:03.731 "num_blocks": 65536, 00:21:03.731 "uuid": "38cef8ed-2837-49ec-a171-d40b27a67aa2", 00:21:03.731 "assigned_rate_limits": { 00:21:03.731 "rw_ios_per_sec": 0, 00:21:03.731 "rw_mbytes_per_sec": 0, 00:21:03.731 "r_mbytes_per_sec": 0, 00:21:03.731 "w_mbytes_per_sec": 0 00:21:03.731 }, 00:21:03.731 "claimed": true, 00:21:03.731 "claim_type": "exclusive_write", 00:21:03.731 "zoned": false, 00:21:03.731 "supported_io_types": { 00:21:03.731 "read": true, 00:21:03.731 "write": true, 00:21:03.731 "unmap": true, 00:21:03.731 "write_zeroes": true, 00:21:03.731 "flush": true, 00:21:03.731 "reset": true, 00:21:03.731 "compare": false, 00:21:03.731 "compare_and_write": false, 00:21:03.731 "abort": true, 00:21:03.731 "nvme_admin": false, 00:21:03.731 "nvme_io": false 00:21:03.731 }, 00:21:03.731 "memory_domains": [ 00:21:03.731 { 00:21:03.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.731 "dma_device_type": 2 00:21:03.731 } 00:21:03.731 ], 00:21:03.731 "driver_specific": {} 00:21:03.731 } 00:21:03.731 ] 00:21:03.731 12:40:46 -- common/autotest_common.sh@895 -- # return 0 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.731 12:40:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.990 12:40:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:03.990 "name": "Existed_Raid", 00:21:03.990 "uuid": "4b091ec1-5e48-48ef-b8ba-2d3eec0a0fa0", 00:21:03.990 "strip_size_kb": 64, 00:21:03.990 "state": "configuring", 00:21:03.990 "raid_level": "raid0", 00:21:03.990 "superblock": true, 00:21:03.990 "num_base_bdevs": 4, 00:21:03.990 "num_base_bdevs_discovered": 1, 00:21:03.990 "num_base_bdevs_operational": 4, 00:21:03.990 "base_bdevs_list": [ 00:21:03.990 { 00:21:03.990 "name": "BaseBdev1", 00:21:03.990 "uuid": "38cef8ed-2837-49ec-a171-d40b27a67aa2", 00:21:03.990 "is_configured": true, 00:21:03.990 "data_offset": 2048, 00:21:03.990 "data_size": 63488 00:21:03.990 }, 00:21:03.990 { 00:21:03.990 "name": "BaseBdev2", 00:21:03.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.990 "is_configured": false, 00:21:03.990 "data_offset": 0, 00:21:03.990 "data_size": 0 00:21:03.990 }, 
00:21:03.990 { 00:21:03.990 "name": "BaseBdev3", 00:21:03.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.990 "is_configured": false, 00:21:03.990 "data_offset": 0, 00:21:03.990 "data_size": 0 00:21:03.990 }, 00:21:03.990 { 00:21:03.990 "name": "BaseBdev4", 00:21:03.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.990 "is_configured": false, 00:21:03.990 "data_offset": 0, 00:21:03.990 "data_size": 0 00:21:03.990 } 00:21:03.990 ] 00:21:03.990 }' 00:21:03.990 12:40:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:03.990 12:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:04.559 12:40:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:04.819 [2024-10-01 12:40:47.118019] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:04.819 [2024-10-01 12:40:47.118072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:04.819 12:40:47 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:04.819 12:40:47 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:05.096 12:40:47 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:05.096 BaseBdev1 00:21:05.096 12:40:47 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:05.096 12:40:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:05.096 12:40:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:05.096 12:40:47 -- common/autotest_common.sh@889 -- # local i 00:21:05.096 12:40:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:05.096 12:40:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:05.096 12:40:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:05.355 12:40:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:05.613 [ 00:21:05.613 { 00:21:05.613 "name": "BaseBdev1", 00:21:05.613 "aliases": [ 00:21:05.613 "9d93f5d7-d83e-4c9f-bc0b-ac220391bc72" 00:21:05.613 ], 00:21:05.613 "product_name": "Malloc disk", 00:21:05.613 "block_size": 512, 00:21:05.613 "num_blocks": 65536, 00:21:05.613 "uuid": "9d93f5d7-d83e-4c9f-bc0b-ac220391bc72", 00:21:05.613 "assigned_rate_limits": { 00:21:05.613 "rw_ios_per_sec": 0, 00:21:05.613 "rw_mbytes_per_sec": 0, 00:21:05.613 "r_mbytes_per_sec": 0, 00:21:05.613 "w_mbytes_per_sec": 0 00:21:05.613 }, 00:21:05.613 "claimed": false, 00:21:05.613 "zoned": false, 00:21:05.613 "supported_io_types": { 00:21:05.613 "read": true, 00:21:05.613 "write": true, 00:21:05.613 "unmap": true, 00:21:05.613 "write_zeroes": true, 00:21:05.613 "flush": true, 00:21:05.613 "reset": true, 00:21:05.613 "compare": false, 00:21:05.613 "compare_and_write": false, 00:21:05.613 "abort": true, 00:21:05.613 "nvme_admin": false, 00:21:05.613 "nvme_io": false 00:21:05.613 }, 00:21:05.613 "memory_domains": [ 00:21:05.613 { 00:21:05.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.613 "dma_device_type": 2 00:21:05.613 } 00:21:05.613 ], 00:21:05.613 "driver_specific": {} 00:21:05.613 } 00:21:05.613 ] 00:21:05.613 12:40:47 -- common/autotest_common.sh@895 -- # return 0 00:21:05.613 12:40:47 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:05.613 [2024-10-01 12:40:48.101950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.613 [2024-10-01 12:40:48.103852] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:05.613 [2024-10-01 12:40:48.103953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:05.613 [2024-10-01 12:40:48.103963] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:05.613 [2024-10-01 12:40:48.103989] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:05.613 [2024-10-01 12:40:48.103996] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:05.613 [2024-10-01 12:40:48.104012] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.613 12:40:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.872 12:40:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.872 "name": "Existed_Raid", 00:21:05.872 "uuid": "e4a7841f-31c9-4b07-a742-51a2c776a723", 00:21:05.872 "strip_size_kb": 64, 00:21:05.872 "state": "configuring", 00:21:05.872 "raid_level": "raid0", 00:21:05.872 "superblock": true, 00:21:05.872 "num_base_bdevs": 4, 00:21:05.872 "num_base_bdevs_discovered": 1, 00:21:05.872 "num_base_bdevs_operational": 4, 00:21:05.872 "base_bdevs_list": [ 00:21:05.872 { 00:21:05.872 "name": "BaseBdev1", 00:21:05.872 "uuid": "9d93f5d7-d83e-4c9f-bc0b-ac220391bc72", 00:21:05.872 "is_configured": true, 00:21:05.872 "data_offset": 2048, 00:21:05.872 "data_size": 63488 00:21:05.872 }, 00:21:05.872 { 00:21:05.872 "name": "BaseBdev2", 00:21:05.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.872 "is_configured": false, 00:21:05.872 "data_offset": 0, 00:21:05.872 "data_size": 0 00:21:05.872 }, 00:21:05.872 { 00:21:05.872 "name": "BaseBdev3", 00:21:05.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.872 "is_configured": false, 00:21:05.872 "data_offset": 0, 00:21:05.872 "data_size": 0 00:21:05.872 }, 00:21:05.872 { 00:21:05.872 "name": "BaseBdev4", 00:21:05.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.872 "is_configured": 
false, 00:21:05.872 "data_offset": 0, 00:21:05.872 "data_size": 0 00:21:05.872 } 00:21:05.872 ] 00:21:05.872 }' 00:21:05.872 12:40:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.872 12:40:48 -- common/autotest_common.sh@10 -- # set +x 00:21:06.437 12:40:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:06.698 [2024-10-01 12:40:49.009080] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.698 BaseBdev2 00:21:06.698 12:40:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:06.698 12:40:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:21:06.698 12:40:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:06.698 12:40:49 -- common/autotest_common.sh@889 -- # local i 00:21:06.698 12:40:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:06.698 12:40:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:06.698 12:40:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:06.698 12:40:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:06.955 [ 00:21:06.955 { 00:21:06.955 "name": "BaseBdev2", 00:21:06.955 "aliases": [ 00:21:06.955 "f2244719-f5f5-4c99-a307-6ace58811fe8" 00:21:06.955 ], 00:21:06.955 "product_name": "Malloc disk", 00:21:06.955 "block_size": 512, 00:21:06.955 "num_blocks": 65536, 00:21:06.955 "uuid": "f2244719-f5f5-4c99-a307-6ace58811fe8", 00:21:06.955 "assigned_rate_limits": { 00:21:06.955 "rw_ios_per_sec": 0, 00:21:06.955 "rw_mbytes_per_sec": 0, 00:21:06.955 "r_mbytes_per_sec": 0, 00:21:06.955 "w_mbytes_per_sec": 0 00:21:06.955 }, 00:21:06.955 "claimed": true, 00:21:06.955 "claim_type": "exclusive_write", 00:21:06.955 "zoned": false, 00:21:06.955 "supported_io_types": { 00:21:06.955 "read": true, 00:21:06.955 "write": true, 00:21:06.955 "unmap": true, 00:21:06.955 "write_zeroes": true, 00:21:06.955 "flush": true, 00:21:06.955 "reset": true, 00:21:06.955 "compare": false, 00:21:06.955 "compare_and_write": false, 00:21:06.955 "abort": true, 00:21:06.955 "nvme_admin": false, 00:21:06.955 "nvme_io": false 00:21:06.955 }, 00:21:06.955 "memory_domains": [ 00:21:06.955 { 00:21:06.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.955 "dma_device_type": 2 00:21:06.955 } 00:21:06.955 ], 00:21:06.955 "driver_specific": {} 00:21:06.955 } 00:21:06.955 ] 00:21:06.955 12:40:49 -- common/autotest_common.sh@895 -- # return 0 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:06.955 
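Each verify_raid_bdev_state pass traced here boils down to one RPC plus a jq filter over its JSON output. A sketch of the same query by hand, assuming the $RPC alias from the earlier sketch (the selected field is illustrative, not the test's full assertion):

$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# expect "configuring" until all four base bdevs are claimed, then "online"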
12:40:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.955 12:40:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.227 12:40:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.227 "name": "Existed_Raid", 00:21:07.227 "uuid": "e4a7841f-31c9-4b07-a742-51a2c776a723", 00:21:07.227 "strip_size_kb": 64, 00:21:07.227 "state": "configuring", 00:21:07.227 "raid_level": "raid0", 00:21:07.227 "superblock": true, 00:21:07.227 "num_base_bdevs": 4, 00:21:07.227 "num_base_bdevs_discovered": 2, 00:21:07.227 "num_base_bdevs_operational": 4, 00:21:07.227 "base_bdevs_list": [ 00:21:07.227 { 00:21:07.227 "name": "BaseBdev1", 00:21:07.227 "uuid": "9d93f5d7-d83e-4c9f-bc0b-ac220391bc72", 00:21:07.227 "is_configured": true, 00:21:07.227 "data_offset": 2048, 00:21:07.227 "data_size": 63488 00:21:07.227 }, 00:21:07.227 { 00:21:07.227 "name": "BaseBdev2", 00:21:07.227 "uuid": "f2244719-f5f5-4c99-a307-6ace58811fe8", 00:21:07.227 "is_configured": true, 00:21:07.227 "data_offset": 2048, 00:21:07.227 "data_size": 63488 00:21:07.227 }, 00:21:07.227 { 00:21:07.227 "name": "BaseBdev3", 00:21:07.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.227 "is_configured": false, 00:21:07.227 "data_offset": 0, 00:21:07.227 "data_size": 0 00:21:07.227 }, 00:21:07.227 { 00:21:07.227 "name": "BaseBdev4", 00:21:07.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.227 "is_configured": false, 00:21:07.227 "data_offset": 0, 00:21:07.227 "data_size": 0 00:21:07.227 } 00:21:07.227 ] 00:21:07.227 }' 00:21:07.227 12:40:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.227 12:40:49 -- common/autotest_common.sh@10 -- # set +x 00:21:07.822 12:40:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:07.822 [2024-10-01 12:40:50.343861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.822 BaseBdev3 00:21:08.080 12:40:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:08.080 12:40:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:21:08.080 12:40:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:08.080 12:40:50 -- common/autotest_common.sh@889 -- # local i 00:21:08.080 12:40:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:08.080 12:40:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:08.080 12:40:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:08.080 12:40:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:08.338 [ 00:21:08.338 { 00:21:08.338 "name": "BaseBdev3", 00:21:08.338 "aliases": [ 00:21:08.338 "cd8ce5c9-7173-493e-a0ce-a0cd6212e98f" 00:21:08.338 ], 00:21:08.338 "product_name": "Malloc disk", 00:21:08.338 "block_size": 512, 00:21:08.338 "num_blocks": 65536, 00:21:08.338 "uuid": "cd8ce5c9-7173-493e-a0ce-a0cd6212e98f", 00:21:08.338 "assigned_rate_limits": { 00:21:08.338 "rw_ios_per_sec": 0, 00:21:08.338 "rw_mbytes_per_sec": 0, 00:21:08.338 "r_mbytes_per_sec": 0, 00:21:08.338 "w_mbytes_per_sec": 0 00:21:08.338 }, 00:21:08.338 "claimed": true, 00:21:08.338 "claim_type": "exclusive_write", 00:21:08.338 "zoned": false, 
00:21:08.338 "supported_io_types": { 00:21:08.338 "read": true, 00:21:08.338 "write": true, 00:21:08.338 "unmap": true, 00:21:08.338 "write_zeroes": true, 00:21:08.338 "flush": true, 00:21:08.338 "reset": true, 00:21:08.338 "compare": false, 00:21:08.338 "compare_and_write": false, 00:21:08.338 "abort": true, 00:21:08.338 "nvme_admin": false, 00:21:08.338 "nvme_io": false 00:21:08.338 }, 00:21:08.338 "memory_domains": [ 00:21:08.338 { 00:21:08.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.338 "dma_device_type": 2 00:21:08.338 } 00:21:08.338 ], 00:21:08.338 "driver_specific": {} 00:21:08.338 } 00:21:08.338 ] 00:21:08.338 12:40:50 -- common/autotest_common.sh@895 -- # return 0 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.338 12:40:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.596 12:40:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.596 "name": "Existed_Raid", 00:21:08.596 "uuid": "e4a7841f-31c9-4b07-a742-51a2c776a723", 00:21:08.596 "strip_size_kb": 64, 00:21:08.596 "state": "configuring", 00:21:08.596 "raid_level": "raid0", 00:21:08.596 "superblock": true, 00:21:08.596 "num_base_bdevs": 4, 00:21:08.596 "num_base_bdevs_discovered": 3, 00:21:08.596 "num_base_bdevs_operational": 4, 00:21:08.596 "base_bdevs_list": [ 00:21:08.596 { 00:21:08.596 "name": "BaseBdev1", 00:21:08.596 "uuid": "9d93f5d7-d83e-4c9f-bc0b-ac220391bc72", 00:21:08.596 "is_configured": true, 00:21:08.596 "data_offset": 2048, 00:21:08.596 "data_size": 63488 00:21:08.596 }, 00:21:08.596 { 00:21:08.596 "name": "BaseBdev2", 00:21:08.596 "uuid": "f2244719-f5f5-4c99-a307-6ace58811fe8", 00:21:08.596 "is_configured": true, 00:21:08.596 "data_offset": 2048, 00:21:08.596 "data_size": 63488 00:21:08.596 }, 00:21:08.596 { 00:21:08.596 "name": "BaseBdev3", 00:21:08.596 "uuid": "cd8ce5c9-7173-493e-a0ce-a0cd6212e98f", 00:21:08.596 "is_configured": true, 00:21:08.596 "data_offset": 2048, 00:21:08.596 "data_size": 63488 00:21:08.596 }, 00:21:08.596 { 00:21:08.596 "name": "BaseBdev4", 00:21:08.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.596 "is_configured": false, 00:21:08.596 "data_offset": 0, 00:21:08.596 "data_size": 0 00:21:08.596 } 00:21:08.596 ] 00:21:08.596 }' 00:21:08.596 12:40:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.596 12:40:50 -- common/autotest_common.sh@10 -- # set +x 00:21:09.163 12:40:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:21:09.163 [2024-10-01 12:40:51.579780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:09.163 [2024-10-01 12:40:51.580010] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:21:09.163 [2024-10-01 12:40:51.580024] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:09.163 [2024-10-01 12:40:51.580125] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:09.163 [2024-10-01 12:40:51.580439] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:21:09.163 [2024-10-01 12:40:51.580459] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:21:09.163 [2024-10-01 12:40:51.580587] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.163 BaseBdev4 00:21:09.163 12:40:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:09.163 12:40:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:21:09.163 12:40:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:09.163 12:40:51 -- common/autotest_common.sh@889 -- # local i 00:21:09.163 12:40:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:09.163 12:40:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:09.163 12:40:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:09.422 12:40:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:09.680 [ 00:21:09.680 { 00:21:09.680 "name": "BaseBdev4", 00:21:09.680 "aliases": [ 00:21:09.680 "56f22f1e-8fc7-42da-af61-3e6fc4593909" 00:21:09.680 ], 00:21:09.680 "product_name": "Malloc disk", 00:21:09.681 "block_size": 512, 00:21:09.681 "num_blocks": 65536, 00:21:09.681 "uuid": "56f22f1e-8fc7-42da-af61-3e6fc4593909", 00:21:09.681 "assigned_rate_limits": { 00:21:09.681 "rw_ios_per_sec": 0, 00:21:09.681 "rw_mbytes_per_sec": 0, 00:21:09.681 "r_mbytes_per_sec": 0, 00:21:09.681 "w_mbytes_per_sec": 0 00:21:09.681 }, 00:21:09.681 "claimed": true, 00:21:09.681 "claim_type": "exclusive_write", 00:21:09.681 "zoned": false, 00:21:09.681 "supported_io_types": { 00:21:09.681 "read": true, 00:21:09.681 "write": true, 00:21:09.681 "unmap": true, 00:21:09.681 "write_zeroes": true, 00:21:09.681 "flush": true, 00:21:09.681 "reset": true, 00:21:09.681 "compare": false, 00:21:09.681 "compare_and_write": false, 00:21:09.681 "abort": true, 00:21:09.681 "nvme_admin": false, 00:21:09.681 "nvme_io": false 00:21:09.681 }, 00:21:09.681 "memory_domains": [ 00:21:09.681 { 00:21:09.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.681 "dma_device_type": 2 00:21:09.681 } 00:21:09.681 ], 00:21:09.681 "driver_specific": {} 00:21:09.681 } 00:21:09.681 ] 00:21:09.681 12:40:51 -- common/autotest_common.sh@895 -- # return 0 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
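A quick cross-check of the "blockcnt 253952, blocklen 512" reported above when Existed_Raid came online: with superblocks enabled, each 65536-block base bdev reserves a 2048-block data_offset, leaving data_size 63488, and RAID0 exposes the sum over all four members, so 4 x (65536 - 2048) = 4 x 63488 = 253952 blocks, i.e. 124 MiB.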
00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.681 12:40:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.681 12:40:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.681 "name": "Existed_Raid", 00:21:09.681 "uuid": "e4a7841f-31c9-4b07-a742-51a2c776a723", 00:21:09.681 "strip_size_kb": 64, 00:21:09.681 "state": "online", 00:21:09.681 "raid_level": "raid0", 00:21:09.681 "superblock": true, 00:21:09.681 "num_base_bdevs": 4, 00:21:09.681 "num_base_bdevs_discovered": 4, 00:21:09.681 "num_base_bdevs_operational": 4, 00:21:09.681 "base_bdevs_list": [ 00:21:09.681 { 00:21:09.681 "name": "BaseBdev1", 00:21:09.681 "uuid": "9d93f5d7-d83e-4c9f-bc0b-ac220391bc72", 00:21:09.681 "is_configured": true, 00:21:09.681 "data_offset": 2048, 00:21:09.681 "data_size": 63488 00:21:09.681 }, 00:21:09.681 { 00:21:09.681 "name": "BaseBdev2", 00:21:09.681 "uuid": "f2244719-f5f5-4c99-a307-6ace58811fe8", 00:21:09.681 "is_configured": true, 00:21:09.681 "data_offset": 2048, 00:21:09.681 "data_size": 63488 00:21:09.681 }, 00:21:09.681 { 00:21:09.681 "name": "BaseBdev3", 00:21:09.681 "uuid": "cd8ce5c9-7173-493e-a0ce-a0cd6212e98f", 00:21:09.681 "is_configured": true, 00:21:09.681 "data_offset": 2048, 00:21:09.681 "data_size": 63488 00:21:09.681 }, 00:21:09.681 { 00:21:09.681 "name": "BaseBdev4", 00:21:09.681 "uuid": "56f22f1e-8fc7-42da-af61-3e6fc4593909", 00:21:09.681 "is_configured": true, 00:21:09.681 "data_offset": 2048, 00:21:09.681 "data_size": 63488 00:21:09.681 } 00:21:09.681 ] 00:21:09.681 }' 00:21:09.681 12:40:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.681 12:40:52 -- common/autotest_common.sh@10 -- # set +x 00:21:10.248 12:40:52 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:10.507 [2024-10-01 12:40:52.833999] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:10.507 [2024-10-01 12:40:52.834026] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:10.507 [2024-10-01 12:40:52.834073] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.507 12:40:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.765 12:40:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.765 "name": "Existed_Raid", 00:21:10.765 "uuid": "e4a7841f-31c9-4b07-a742-51a2c776a723", 00:21:10.765 "strip_size_kb": 64, 00:21:10.765 "state": "offline", 00:21:10.765 "raid_level": "raid0", 00:21:10.765 "superblock": true, 00:21:10.765 "num_base_bdevs": 4, 00:21:10.765 "num_base_bdevs_discovered": 3, 00:21:10.765 "num_base_bdevs_operational": 3, 00:21:10.765 "base_bdevs_list": [ 00:21:10.765 { 00:21:10.765 "name": null, 00:21:10.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.765 "is_configured": false, 00:21:10.765 "data_offset": 2048, 00:21:10.765 "data_size": 63488 00:21:10.765 }, 00:21:10.765 { 00:21:10.765 "name": "BaseBdev2", 00:21:10.765 "uuid": "f2244719-f5f5-4c99-a307-6ace58811fe8", 00:21:10.765 "is_configured": true, 00:21:10.765 "data_offset": 2048, 00:21:10.765 "data_size": 63488 00:21:10.765 }, 00:21:10.765 { 00:21:10.765 "name": "BaseBdev3", 00:21:10.765 "uuid": "cd8ce5c9-7173-493e-a0ce-a0cd6212e98f", 00:21:10.765 "is_configured": true, 00:21:10.765 "data_offset": 2048, 00:21:10.765 "data_size": 63488 00:21:10.765 }, 00:21:10.765 { 00:21:10.765 "name": "BaseBdev4", 00:21:10.765 "uuid": "56f22f1e-8fc7-42da-af61-3e6fc4593909", 00:21:10.765 "is_configured": true, 00:21:10.765 "data_offset": 2048, 00:21:10.765 "data_size": 63488 00:21:10.765 } 00:21:10.765 ] 00:21:10.765 }' 00:21:10.765 12:40:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.765 12:40:53 -- common/autotest_common.sh@10 -- # set +x 00:21:11.331 12:40:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:11.331 12:40:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:11.331 12:40:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.331 12:40:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:11.331 12:40:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:11.331 12:40:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:11.331 12:40:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:11.590 [2024-10-01 12:40:54.011360] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.590 12:40:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:11.590 12:40:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:11.590 12:40:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.590 12:40:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:11.848 12:40:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:11.848 12:40:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:11.848 12:40:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:21:12.106 [2024-10-01 12:40:54.461681] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:12.106 12:40:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:12.106 12:40:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:12.106 12:40:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.106 12:40:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:12.365 12:40:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:12.365 12:40:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:12.365 12:40:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:12.365 [2024-10-01 12:40:54.890537] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:12.365 [2024-10-01 12:40:54.890603] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:21:12.623 12:40:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:12.623 12:40:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:12.623 12:40:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:12.623 12:40:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.881 12:40:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:12.881 12:40:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:12.881 12:40:55 -- bdev/bdev_raid.sh@287 -- # killprocess 119372 00:21:12.881 12:40:55 -- common/autotest_common.sh@926 -- # '[' -z 119372 ']' 00:21:12.881 12:40:55 -- common/autotest_common.sh@930 -- # kill -0 119372 00:21:12.881 12:40:55 -- common/autotest_common.sh@931 -- # uname 00:21:12.881 12:40:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:12.881 12:40:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119372 00:21:12.881 12:40:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:12.881 12:40:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:12.881 12:40:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119372' 00:21:12.881 killing process with pid 119372 00:21:12.881 12:40:55 -- common/autotest_common.sh@945 -- # kill 119372 00:21:12.881 [2024-10-01 12:40:55.244099] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:12.881 [2024-10-01 12:40:55.244217] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.881 12:40:55 -- common/autotest_common.sh@950 -- # wait 119372 00:21:13.819 ************************************ 00:21:13.819 END TEST raid_state_function_test_sb 00:21:13.819 ************************************ 00:21:13.819 12:40:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:13.819 00:21:13.819 real 0m12.717s 00:21:13.819 user 0m21.519s 00:21:13.819 sys 0m2.277s 00:21:13.819 12:40:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.819 12:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:21:14.078 12:40:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:14.078 12:40:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:14.078 12:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:14.078 ************************************ 00:21:14.078 START 
TEST raid_superblock_test 00:21:14.078 ************************************ 00:21:14.078 12:40:56 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@357 -- # raid_pid=119797 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119797 /var/tmp/spdk-raid.sock 00:21:14.078 12:40:56 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:14.079 12:40:56 -- common/autotest_common.sh@819 -- # '[' -z 119797 ']' 00:21:14.079 12:40:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:14.079 12:40:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:14.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:14.079 12:40:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:14.079 12:40:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:14.079 12:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:14.079 [2024-10-01 12:40:56.457830] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
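Unlike the state-function tests, raid_superblock_test layers a passthru bdev over each malloc bdev so that every member carries a fixed, known UUID. A sketch of that pattern for the first member, assuming the $RPC alias from the first sketch (the UUID is the test's fixed value, visible in the traces below):

$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# ...repeat for pt2..pt4, then assemble the array with a superblock:
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s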
00:21:14.079 [2024-10-01 12:40:56.458379] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119797 ] 00:21:14.337 [2024-10-01 12:40:56.624667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.337 [2024-10-01 12:40:56.775406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.595 [2024-10-01 12:40:56.920419] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.854 12:40:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:14.854 12:40:57 -- common/autotest_common.sh@852 -- # return 0 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:14.854 12:40:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:15.112 malloc1 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:15.112 [2024-10-01 12:40:57.604492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:15.112 [2024-10-01 12:40:57.604581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.112 [2024-10-01 12:40:57.604610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:15.112 [2024-10-01 12:40:57.604647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.112 [2024-10-01 12:40:57.606812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.112 [2024-10-01 12:40:57.606863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:15.112 pt1 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:15.112 12:40:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:15.372 malloc2 00:21:15.372 12:40:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:21:15.630 [2024-10-01 12:40:58.041307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:15.630 [2024-10-01 12:40:58.041366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.630 [2024-10-01 12:40:58.041415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:15.630 [2024-10-01 12:40:58.041462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.630 [2024-10-01 12:40:58.043555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.630 [2024-10-01 12:40:58.043601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:15.630 pt2 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:15.630 12:40:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:15.888 malloc3 00:21:15.888 12:40:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:16.147 [2024-10-01 12:40:58.446119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:16.147 [2024-10-01 12:40:58.446180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.147 [2024-10-01 12:40:58.446232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:16.147 [2024-10-01 12:40:58.446268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.147 [2024-10-01 12:40:58.448421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.147 [2024-10-01 12:40:58.448473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:16.147 pt3 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:16.147 malloc4 00:21:16.147 12:40:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:21:16.406 [2024-10-01 12:40:58.830931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:16.406 [2024-10-01 12:40:58.830997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.406 [2024-10-01 12:40:58.831041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:16.406 [2024-10-01 12:40:58.831076] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.406 [2024-10-01 12:40:58.833248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.406 [2024-10-01 12:40:58.833306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:16.406 pt4 00:21:16.406 12:40:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:16.406 12:40:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:16.406 12:40:58 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:16.664 [2024-10-01 12:40:59.002719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:16.664 [2024-10-01 12:40:59.004532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:16.664 [2024-10-01 12:40:59.004597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:16.664 [2024-10-01 12:40:59.004657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:16.664 [2024-10-01 12:40:59.004822] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:21:16.664 [2024-10-01 12:40:59.004831] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:16.664 [2024-10-01 12:40:59.004915] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:16.664 [2024-10-01 12:40:59.005188] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:21:16.664 [2024-10-01 12:40:59.005208] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:21:16.664 [2024-10-01 12:40:59.005332] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.664 12:40:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:16.664 "name": "raid_bdev1", 00:21:16.664 "uuid": 
"86453d94-969d-4e03-82dd-c3e0794bd70a", 00:21:16.664 "strip_size_kb": 64, 00:21:16.664 "state": "online", 00:21:16.664 "raid_level": "raid0", 00:21:16.665 "superblock": true, 00:21:16.665 "num_base_bdevs": 4, 00:21:16.665 "num_base_bdevs_discovered": 4, 00:21:16.665 "num_base_bdevs_operational": 4, 00:21:16.665 "base_bdevs_list": [ 00:21:16.665 { 00:21:16.665 "name": "pt1", 00:21:16.665 "uuid": "d84fe3c9-58fa-55c7-8abd-074d5886681b", 00:21:16.665 "is_configured": true, 00:21:16.665 "data_offset": 2048, 00:21:16.665 "data_size": 63488 00:21:16.665 }, 00:21:16.665 { 00:21:16.665 "name": "pt2", 00:21:16.665 "uuid": "82163e8e-fa3d-54e4-ae7c-9e22f1838889", 00:21:16.665 "is_configured": true, 00:21:16.665 "data_offset": 2048, 00:21:16.665 "data_size": 63488 00:21:16.665 }, 00:21:16.665 { 00:21:16.665 "name": "pt3", 00:21:16.665 "uuid": "e7350f5b-5fdb-5445-994c-3aa647ebc77c", 00:21:16.665 "is_configured": true, 00:21:16.665 "data_offset": 2048, 00:21:16.665 "data_size": 63488 00:21:16.665 }, 00:21:16.665 { 00:21:16.665 "name": "pt4", 00:21:16.665 "uuid": "da22b15e-e708-5e67-a6de-ee9b2a34a24f", 00:21:16.665 "is_configured": true, 00:21:16.665 "data_offset": 2048, 00:21:16.665 "data_size": 63488 00:21:16.665 } 00:21:16.665 ] 00:21:16.665 }' 00:21:16.665 12:40:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:16.665 12:40:59 -- common/autotest_common.sh@10 -- # set +x 00:21:17.232 12:40:59 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:17.232 12:40:59 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:17.491 [2024-10-01 12:40:59.909522] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:17.491 12:40:59 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=86453d94-969d-4e03-82dd-c3e0794bd70a 00:21:17.491 12:40:59 -- bdev/bdev_raid.sh@380 -- # '[' -z 86453d94-969d-4e03-82dd-c3e0794bd70a ']' 00:21:17.491 12:40:59 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:17.750 [2024-10-01 12:41:00.089059] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.750 [2024-10-01 12:41:00.089084] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.750 [2024-10-01 12:41:00.089153] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.750 [2024-10-01 12:41:00.089232] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.750 [2024-10-01 12:41:00.089242] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:21:17.750 12:41:00 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.750 12:41:00 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:18.009 12:41:00 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:18.009 12:41:00 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:18.009 12:41:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:18.009 12:41:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:18.009 12:41:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:18.009 12:41:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:21:18.267 12:41:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:18.267 12:41:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:18.526 12:41:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:18.526 12:41:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:18.526 12:41:01 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:18.526 12:41:01 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:18.784 12:41:01 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:18.784 12:41:01 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:18.784 12:41:01 -- common/autotest_common.sh@640 -- # local es=0 00:21:18.784 12:41:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:18.784 12:41:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.784 12:41:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.784 12:41:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.784 12:41:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.784 12:41:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.784 12:41:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.784 12:41:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.784 12:41:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:18.784 12:41:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:19.043 [2024-10-01 12:41:01.408234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:19.043 [2024-10-01 12:41:01.410051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:19.043 [2024-10-01 12:41:01.410093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:19.043 [2024-10-01 12:41:01.410125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:19.043 [2024-10-01 12:41:01.410162] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:19.043 [2024-10-01 12:41:01.410218] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:19.043 [2024-10-01 12:41:01.410259] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:19.043 [2024-10-01 12:41:01.410307] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:21:19.043 [2024-10-01 12:41:01.410328] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.043 [2024-10-01 12:41:01.410337] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:21:19.043 request: 00:21:19.043 { 00:21:19.043 "name": "raid_bdev1", 00:21:19.043 "raid_level": "raid0", 00:21:19.043 "base_bdevs": [ 00:21:19.043 "malloc1", 00:21:19.043 "malloc2", 00:21:19.043 "malloc3", 00:21:19.043 "malloc4" 00:21:19.043 ], 00:21:19.043 "superblock": false, 00:21:19.043 "strip_size_kb": 64, 00:21:19.043 "method": "bdev_raid_create", 00:21:19.043 "req_id": 1 00:21:19.043 } 00:21:19.043 Got JSON-RPC error response 00:21:19.043 response: 00:21:19.043 { 00:21:19.043 "code": -17, 00:21:19.043 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:19.043 } 00:21:19.043 12:41:01 -- common/autotest_common.sh@643 -- # es=1 00:21:19.043 12:41:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:19.043 12:41:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:19.043 12:41:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:19.043 12:41:01 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.043 12:41:01 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:19.301 12:41:01 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:19.301 12:41:01 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:19.301 12:41:01 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:19.301 [2024-10-01 12:41:01.787689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:19.301 [2024-10-01 12:41:01.787760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.301 [2024-10-01 12:41:01.787803] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:19.301 [2024-10-01 12:41:01.787829] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.301 [2024-10-01 12:41:01.790016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.301 [2024-10-01 12:41:01.790079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:19.301 [2024-10-01 12:41:01.790188] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:19.301 [2024-10-01 12:41:01.790254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:19.301 pt1 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.302 12:41:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.560 12:41:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:19.560 "name": "raid_bdev1", 00:21:19.560 "uuid": "86453d94-969d-4e03-82dd-c3e0794bd70a", 00:21:19.560 "strip_size_kb": 64, 00:21:19.560 "state": "configuring", 00:21:19.560 "raid_level": "raid0", 00:21:19.560 "superblock": true, 00:21:19.560 "num_base_bdevs": 4, 00:21:19.560 "num_base_bdevs_discovered": 1, 00:21:19.560 "num_base_bdevs_operational": 4, 00:21:19.560 "base_bdevs_list": [ 00:21:19.560 { 00:21:19.560 "name": "pt1", 00:21:19.560 "uuid": "d84fe3c9-58fa-55c7-8abd-074d5886681b", 00:21:19.560 "is_configured": true, 00:21:19.560 "data_offset": 2048, 00:21:19.560 "data_size": 63488 00:21:19.560 }, 00:21:19.560 { 00:21:19.560 "name": null, 00:21:19.560 "uuid": "82163e8e-fa3d-54e4-ae7c-9e22f1838889", 00:21:19.560 "is_configured": false, 00:21:19.560 "data_offset": 2048, 00:21:19.560 "data_size": 63488 00:21:19.560 }, 00:21:19.560 { 00:21:19.560 "name": null, 00:21:19.560 "uuid": "e7350f5b-5fdb-5445-994c-3aa647ebc77c", 00:21:19.560 "is_configured": false, 00:21:19.560 "data_offset": 2048, 00:21:19.560 "data_size": 63488 00:21:19.560 }, 00:21:19.560 { 00:21:19.560 "name": null, 00:21:19.560 "uuid": "da22b15e-e708-5e67-a6de-ee9b2a34a24f", 00:21:19.560 "is_configured": false, 00:21:19.560 "data_offset": 2048, 00:21:19.560 "data_size": 63488 00:21:19.560 } 00:21:19.560 ] 00:21:19.560 }' 00:21:19.560 12:41:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:19.560 12:41:01 -- common/autotest_common.sh@10 -- # set +x 00:21:20.127 12:41:02 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:21:20.127 12:41:02 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:20.386 [2024-10-01 12:41:02.674388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:20.386 [2024-10-01 12:41:02.674466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.386 [2024-10-01 12:41:02.674501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:20.386 [2024-10-01 12:41:02.674520] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.386 [2024-10-01 12:41:02.674937] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.386 [2024-10-01 12:41:02.674982] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:20.386 [2024-10-01 12:41:02.675079] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:20.386 [2024-10-01 12:41:02.675098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:20.386 pt2 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:20.386 [2024-10-01 12:41:02.878116] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:20.386 12:41:02 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.386 12:41:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.645 12:41:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.645 "name": "raid_bdev1", 00:21:20.645 "uuid": "86453d94-969d-4e03-82dd-c3e0794bd70a", 00:21:20.645 "strip_size_kb": 64, 00:21:20.645 "state": "configuring", 00:21:20.645 "raid_level": "raid0", 00:21:20.645 "superblock": true, 00:21:20.645 "num_base_bdevs": 4, 00:21:20.645 "num_base_bdevs_discovered": 1, 00:21:20.645 "num_base_bdevs_operational": 4, 00:21:20.645 "base_bdevs_list": [ 00:21:20.645 { 00:21:20.645 "name": "pt1", 00:21:20.645 "uuid": "d84fe3c9-58fa-55c7-8abd-074d5886681b", 00:21:20.645 "is_configured": true, 00:21:20.645 "data_offset": 2048, 00:21:20.645 "data_size": 63488 00:21:20.645 }, 00:21:20.645 { 00:21:20.645 "name": null, 00:21:20.645 "uuid": "82163e8e-fa3d-54e4-ae7c-9e22f1838889", 00:21:20.645 "is_configured": false, 00:21:20.645 "data_offset": 2048, 00:21:20.645 "data_size": 63488 00:21:20.645 }, 00:21:20.645 { 00:21:20.645 "name": null, 00:21:20.645 "uuid": "e7350f5b-5fdb-5445-994c-3aa647ebc77c", 00:21:20.645 "is_configured": false, 00:21:20.645 "data_offset": 2048, 00:21:20.645 "data_size": 63488 00:21:20.645 }, 00:21:20.645 { 00:21:20.645 "name": null, 00:21:20.645 "uuid": "da22b15e-e708-5e67-a6de-ee9b2a34a24f", 00:21:20.645 "is_configured": false, 00:21:20.645 "data_offset": 2048, 00:21:20.645 "data_size": 63488 00:21:20.645 } 00:21:20.645 ] 00:21:20.645 }' 00:21:20.645 12:41:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.645 12:41:03 -- common/autotest_common.sh@10 -- # set +x 00:21:21.212 12:41:03 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:21.212 12:41:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:21.212 12:41:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:21.471 [2024-10-01 12:41:03.793025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:21.471 [2024-10-01 12:41:03.793103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.471 [2024-10-01 12:41:03.793141] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:21.471 [2024-10-01 12:41:03.793163] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.471 [2024-10-01 12:41:03.793588] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.471 [2024-10-01 12:41:03.793644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:21.472 [2024-10-01 12:41:03.793744] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:21.472 [2024-10-01 12:41:03.793763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:21.472 pt2 00:21:21.472 12:41:03 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:21.472 12:41:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:21.472 12:41:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:21.472 [2024-10-01 12:41:03.972753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:21.472 [2024-10-01 12:41:03.972829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.472 [2024-10-01 12:41:03.972859] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:21.472 [2024-10-01 12:41:03.972883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.472 [2024-10-01 12:41:03.973297] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.472 [2024-10-01 12:41:03.973352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:21.472 [2024-10-01 12:41:03.973447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:21.472 [2024-10-01 12:41:03.973467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:21.472 pt3 00:21:21.472 12:41:03 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:21.472 12:41:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:21.472 12:41:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:21.731 [2024-10-01 12:41:04.128508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:21.731 [2024-10-01 12:41:04.128572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.731 [2024-10-01 12:41:04.128605] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:21.731 [2024-10-01 12:41:04.128631] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.731 [2024-10-01 12:41:04.129012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.731 [2024-10-01 12:41:04.129061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:21.731 [2024-10-01 12:41:04.129157] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:21.731 [2024-10-01 12:41:04.129181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:21.731 [2024-10-01 12:41:04.129288] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:21:21.731 [2024-10-01 12:41:04.129296] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:21.731 [2024-10-01 12:41:04.129379] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:21.731 [2024-10-01 12:41:04.129646] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:21:21.731 [2024-10-01 12:41:04.129665] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:21:21.731 [2024-10-01 12:41:04.129773] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.731 pt4 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.731 12:41:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.989 12:41:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.989 "name": "raid_bdev1", 00:21:21.989 "uuid": "86453d94-969d-4e03-82dd-c3e0794bd70a", 00:21:21.989 "strip_size_kb": 64, 00:21:21.989 "state": "online", 00:21:21.989 "raid_level": "raid0", 00:21:21.989 "superblock": true, 00:21:21.989 "num_base_bdevs": 4, 00:21:21.989 "num_base_bdevs_discovered": 4, 00:21:21.989 "num_base_bdevs_operational": 4, 00:21:21.989 "base_bdevs_list": [ 00:21:21.990 { 00:21:21.990 "name": "pt1", 00:21:21.990 "uuid": "d84fe3c9-58fa-55c7-8abd-074d5886681b", 00:21:21.990 "is_configured": true, 00:21:21.990 "data_offset": 2048, 00:21:21.990 "data_size": 63488 00:21:21.990 }, 00:21:21.990 { 00:21:21.990 "name": "pt2", 00:21:21.990 "uuid": "82163e8e-fa3d-54e4-ae7c-9e22f1838889", 00:21:21.990 "is_configured": true, 00:21:21.990 "data_offset": 2048, 00:21:21.990 "data_size": 63488 00:21:21.990 }, 00:21:21.990 { 00:21:21.990 "name": "pt3", 00:21:21.990 "uuid": "e7350f5b-5fdb-5445-994c-3aa647ebc77c", 00:21:21.990 "is_configured": true, 00:21:21.990 "data_offset": 2048, 00:21:21.990 "data_size": 63488 00:21:21.990 }, 00:21:21.990 { 00:21:21.990 "name": "pt4", 00:21:21.990 "uuid": "da22b15e-e708-5e67-a6de-ee9b2a34a24f", 00:21:21.990 "is_configured": true, 00:21:21.990 "data_offset": 2048, 00:21:21.990 "data_size": 63488 00:21:21.990 } 00:21:21.990 ] 00:21:21.990 }' 00:21:21.990 12:41:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.990 12:41:04 -- common/autotest_common.sh@10 -- # set +x 00:21:22.557 12:41:04 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:22.557 12:41:04 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:22.557 [2024-10-01 12:41:04.971508] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.557 12:41:04 -- bdev/bdev_raid.sh@430 -- # '[' 86453d94-969d-4e03-82dd-c3e0794bd70a '!=' 86453d94-969d-4e03-82dd-c3e0794bd70a ']' 00:21:22.557 12:41:04 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:21:22.557 12:41:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:22.557 12:41:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:22.557 12:41:04 -- bdev/bdev_raid.sh@511 -- # killprocess 119797 00:21:22.557 12:41:04 -- common/autotest_common.sh@926 -- # '[' -z 119797 ']' 00:21:22.557 12:41:04 -- common/autotest_common.sh@930 -- # kill -0 119797 00:21:22.557 12:41:04 -- common/autotest_common.sh@931 -- # uname 00:21:22.557 12:41:04 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:22.557 12:41:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119797 00:21:22.557 12:41:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:22.557 12:41:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:22.557 killing process with pid 119797 00:21:22.557 12:41:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119797' 00:21:22.557 12:41:05 -- common/autotest_common.sh@945 -- # kill 119797 00:21:22.557 [2024-10-01 12:41:05.020811] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.557 [2024-10-01 12:41:05.020877] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.557 [2024-10-01 12:41:05.020936] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.557 12:41:05 -- common/autotest_common.sh@950 -- # wait 119797 00:21:22.557 [2024-10-01 12:41:05.020944] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:21:22.816 [2024-10-01 12:41:05.330823] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:24.198 ************************************ 00:21:24.198 END TEST raid_superblock_test 00:21:24.198 ************************************ 00:21:24.198 00:21:24.198 real 0m9.996s 00:21:24.198 user 0m16.540s 00:21:24.198 sys 0m1.654s 00:21:24.198 12:41:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:24.198 12:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:21:24.198 12:41:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:24.198 12:41:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:24.198 12:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:24.198 ************************************ 00:21:24.198 START TEST raid_state_function_test 00:21:24.198 ************************************ 00:21:24.198 12:41:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:24.198 12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.199 
12:41:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=120106 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:24.199 Process raid pid: 120106 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120106' 00:21:24.199 12:41:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120106 /var/tmp/spdk-raid.sock 00:21:24.199 12:41:06 -- common/autotest_common.sh@819 -- # '[' -z 120106 ']' 00:21:24.199 12:41:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:24.199 12:41:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:24.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:24.199 12:41:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:24.199 12:41:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:24.199 12:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:24.199 [2024-10-01 12:41:06.549843] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
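(Editor's aside: the app/RPC handshake the harness performs here can be reproduced by hand. The Bash sketch below is a minimal approximation of what `waitforlisten` plus the `rpc.py -s` calls in this trace do; the `SPDK_DIR` variable, the polling loop, and the 10-second timeout are assumptions, while the `bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid` invocation and the `bdev_malloc_create 32 512` / `bdev_passthru_create` arguments are copied verbatim from the log.)

```bash
#!/usr/bin/env bash
# Minimal sketch (not the harness itself): start bdev_svc on a private RPC
# socket, wait until it listens, then drive it the way the trace does.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout path
SOCK=/var/tmp/spdk-raid.sock
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"

"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
svc_pid=$!

# Poor man's waitforlisten: poll until the UNIX socket answers (~10 s cap).
for _ in $(seq 1 100); do
    $RPC rpc_get_methods &>/dev/null && break
    sleep 0.1
done

# Same base-bdev construction as the trace: a 32 MiB / 512 B-block malloc
# bdev wrapped in a passthru bdev carrying a fixed UUID.
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

kill "$svc_pid"
```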
00:21:24.199 [2024-10-01 12:41:06.550007] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.199 [2024-10-01 12:41:06.716065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.458 [2024-10-01 12:41:06.865775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.717 [2024-10-01 12:41:07.017998] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.976 12:41:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:24.976 12:41:07 -- common/autotest_common.sh@852 -- # return 0 00:21:24.976 12:41:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:25.234 [2024-10-01 12:41:07.533415] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.234 [2024-10-01 12:41:07.533490] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.234 [2024-10-01 12:41:07.533499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.234 [2024-10-01 12:41:07.533534] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.234 [2024-10-01 12:41:07.533541] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:25.234 [2024-10-01 12:41:07.533573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.234 [2024-10-01 12:41:07.533580] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.234 [2024-10-01 12:41:07.533601] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.234 "name": "Existed_Raid", 00:21:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.234 "strip_size_kb": 64, 00:21:25.234 "state": "configuring", 00:21:25.234 "raid_level": "concat", 00:21:25.234 "superblock": false, 00:21:25.234 "num_base_bdevs": 4, 00:21:25.234 "num_base_bdevs_discovered": 0, 00:21:25.234 "num_base_bdevs_operational": 4, 00:21:25.234 "base_bdevs_list": [ 00:21:25.234 { 00:21:25.234 
"name": "BaseBdev1", 00:21:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.234 "is_configured": false, 00:21:25.234 "data_offset": 0, 00:21:25.234 "data_size": 0 00:21:25.234 }, 00:21:25.234 { 00:21:25.234 "name": "BaseBdev2", 00:21:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.234 "is_configured": false, 00:21:25.234 "data_offset": 0, 00:21:25.234 "data_size": 0 00:21:25.234 }, 00:21:25.234 { 00:21:25.234 "name": "BaseBdev3", 00:21:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.234 "is_configured": false, 00:21:25.234 "data_offset": 0, 00:21:25.234 "data_size": 0 00:21:25.234 }, 00:21:25.234 { 00:21:25.234 "name": "BaseBdev4", 00:21:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.234 "is_configured": false, 00:21:25.234 "data_offset": 0, 00:21:25.234 "data_size": 0 00:21:25.234 } 00:21:25.234 ] 00:21:25.234 }' 00:21:25.234 12:41:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.234 12:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:25.802 12:41:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:26.061 [2024-10-01 12:41:08.428021] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.061 [2024-10-01 12:41:08.428053] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:21:26.061 12:41:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:26.320 [2024-10-01 12:41:08.616034] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:26.320 [2024-10-01 12:41:08.616099] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:26.320 [2024-10-01 12:41:08.616108] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:26.320 [2024-10-01 12:41:08.616146] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:26.320 [2024-10-01 12:41:08.616153] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:26.320 [2024-10-01 12:41:08.616186] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:26.320 [2024-10-01 12:41:08.616192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:26.320 [2024-10-01 12:41:08.616213] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:26.320 12:41:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:26.320 [2024-10-01 12:41:08.826791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.320 BaseBdev1 00:21:26.320 12:41:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:26.320 12:41:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:26.320 12:41:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:26.320 12:41:08 -- common/autotest_common.sh@889 -- # local i 00:21:26.320 12:41:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:26.320 12:41:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:26.320 12:41:08 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:26.579 12:41:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:26.863 [ 00:21:26.863 { 00:21:26.863 "name": "BaseBdev1", 00:21:26.863 "aliases": [ 00:21:26.863 "12bbd2cb-8f3e-498e-bb44-c522b9847a0a" 00:21:26.863 ], 00:21:26.863 "product_name": "Malloc disk", 00:21:26.863 "block_size": 512, 00:21:26.863 "num_blocks": 65536, 00:21:26.863 "uuid": "12bbd2cb-8f3e-498e-bb44-c522b9847a0a", 00:21:26.863 "assigned_rate_limits": { 00:21:26.863 "rw_ios_per_sec": 0, 00:21:26.863 "rw_mbytes_per_sec": 0, 00:21:26.863 "r_mbytes_per_sec": 0, 00:21:26.863 "w_mbytes_per_sec": 0 00:21:26.863 }, 00:21:26.863 "claimed": true, 00:21:26.863 "claim_type": "exclusive_write", 00:21:26.863 "zoned": false, 00:21:26.863 "supported_io_types": { 00:21:26.863 "read": true, 00:21:26.863 "write": true, 00:21:26.863 "unmap": true, 00:21:26.863 "write_zeroes": true, 00:21:26.863 "flush": true, 00:21:26.863 "reset": true, 00:21:26.863 "compare": false, 00:21:26.863 "compare_and_write": false, 00:21:26.863 "abort": true, 00:21:26.863 "nvme_admin": false, 00:21:26.863 "nvme_io": false 00:21:26.863 }, 00:21:26.863 "memory_domains": [ 00:21:26.863 { 00:21:26.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.863 "dma_device_type": 2 00:21:26.863 } 00:21:26.863 ], 00:21:26.863 "driver_specific": {} 00:21:26.863 } 00:21:26.863 ] 00:21:26.863 12:41:09 -- common/autotest_common.sh@895 -- # return 0 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.863 12:41:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.140 12:41:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.140 "name": "Existed_Raid", 00:21:27.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.140 "strip_size_kb": 64, 00:21:27.140 "state": "configuring", 00:21:27.140 "raid_level": "concat", 00:21:27.140 "superblock": false, 00:21:27.140 "num_base_bdevs": 4, 00:21:27.140 "num_base_bdevs_discovered": 1, 00:21:27.140 "num_base_bdevs_operational": 4, 00:21:27.140 "base_bdevs_list": [ 00:21:27.140 { 00:21:27.140 "name": "BaseBdev1", 00:21:27.140 "uuid": "12bbd2cb-8f3e-498e-bb44-c522b9847a0a", 00:21:27.140 "is_configured": true, 00:21:27.140 "data_offset": 0, 00:21:27.140 "data_size": 65536 00:21:27.140 }, 00:21:27.140 { 00:21:27.140 "name": "BaseBdev2", 00:21:27.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.140 "is_configured": false, 00:21:27.140 "data_offset": 0, 00:21:27.140 "data_size": 0 00:21:27.140 }, 
00:21:27.140 { 00:21:27.140 "name": "BaseBdev3", 00:21:27.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.140 "is_configured": false, 00:21:27.140 "data_offset": 0, 00:21:27.140 "data_size": 0 00:21:27.140 }, 00:21:27.140 { 00:21:27.140 "name": "BaseBdev4", 00:21:27.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.140 "is_configured": false, 00:21:27.141 "data_offset": 0, 00:21:27.141 "data_size": 0 00:21:27.141 } 00:21:27.141 ] 00:21:27.141 }' 00:21:27.141 12:41:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.141 12:41:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.400 12:41:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:27.660 [2024-10-01 12:41:10.088985] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:27.660 [2024-10-01 12:41:10.089035] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:27.660 12:41:10 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:27.660 12:41:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:27.920 [2024-10-01 12:41:10.268781] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.920 [2024-10-01 12:41:10.270663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.920 [2024-10-01 12:41:10.270733] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.920 [2024-10-01 12:41:10.270742] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:27.920 [2024-10-01 12:41:10.270780] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:27.920 [2024-10-01 12:41:10.270787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:27.920 [2024-10-01 12:41:10.270801] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.920 12:41:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.180 12:41:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.180 "name": "Existed_Raid", 00:21:28.180 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.180 "strip_size_kb": 64, 00:21:28.180 "state": "configuring", 00:21:28.180 "raid_level": "concat", 00:21:28.180 "superblock": false, 00:21:28.180 "num_base_bdevs": 4, 00:21:28.180 "num_base_bdevs_discovered": 1, 00:21:28.180 "num_base_bdevs_operational": 4, 00:21:28.180 "base_bdevs_list": [ 00:21:28.180 { 00:21:28.180 "name": "BaseBdev1", 00:21:28.180 "uuid": "12bbd2cb-8f3e-498e-bb44-c522b9847a0a", 00:21:28.180 "is_configured": true, 00:21:28.180 "data_offset": 0, 00:21:28.180 "data_size": 65536 00:21:28.180 }, 00:21:28.180 { 00:21:28.180 "name": "BaseBdev2", 00:21:28.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.180 "is_configured": false, 00:21:28.180 "data_offset": 0, 00:21:28.180 "data_size": 0 00:21:28.180 }, 00:21:28.180 { 00:21:28.180 "name": "BaseBdev3", 00:21:28.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.180 "is_configured": false, 00:21:28.180 "data_offset": 0, 00:21:28.180 "data_size": 0 00:21:28.180 }, 00:21:28.180 { 00:21:28.180 "name": "BaseBdev4", 00:21:28.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.180 "is_configured": false, 00:21:28.180 "data_offset": 0, 00:21:28.180 "data_size": 0 00:21:28.180 } 00:21:28.180 ] 00:21:28.180 }' 00:21:28.180 12:41:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.180 12:41:10 -- common/autotest_common.sh@10 -- # set +x 00:21:28.748 12:41:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.748 [2024-10-01 12:41:11.231099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.748 BaseBdev2 00:21:28.748 12:41:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:28.748 12:41:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:21:28.748 12:41:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:28.748 12:41:11 -- common/autotest_common.sh@889 -- # local i 00:21:28.748 12:41:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:28.748 12:41:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:28.748 12:41:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:29.006 12:41:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:29.266 [ 00:21:29.266 { 00:21:29.266 "name": "BaseBdev2", 00:21:29.266 "aliases": [ 00:21:29.266 "9f409d53-8c90-4794-b409-73035fd22573" 00:21:29.266 ], 00:21:29.266 "product_name": "Malloc disk", 00:21:29.266 "block_size": 512, 00:21:29.266 "num_blocks": 65536, 00:21:29.266 "uuid": "9f409d53-8c90-4794-b409-73035fd22573", 00:21:29.266 "assigned_rate_limits": { 00:21:29.266 "rw_ios_per_sec": 0, 00:21:29.266 "rw_mbytes_per_sec": 0, 00:21:29.266 "r_mbytes_per_sec": 0, 00:21:29.266 "w_mbytes_per_sec": 0 00:21:29.266 }, 00:21:29.266 "claimed": true, 00:21:29.266 "claim_type": "exclusive_write", 00:21:29.266 "zoned": false, 00:21:29.266 "supported_io_types": { 00:21:29.266 "read": true, 00:21:29.266 "write": true, 00:21:29.266 "unmap": true, 00:21:29.266 "write_zeroes": true, 00:21:29.266 "flush": true, 00:21:29.266 "reset": true, 00:21:29.266 "compare": false, 00:21:29.266 "compare_and_write": false, 00:21:29.266 "abort": true, 00:21:29.266 "nvme_admin": false, 00:21:29.266 "nvme_io": false 00:21:29.266 }, 00:21:29.266 "memory_domains": [ 
00:21:29.266 { 00:21:29.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.266 "dma_device_type": 2 00:21:29.266 } 00:21:29.266 ], 00:21:29.266 "driver_specific": {} 00:21:29.266 } 00:21:29.266 ] 00:21:29.266 12:41:11 -- common/autotest_common.sh@895 -- # return 0 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.266 12:41:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.266 "name": "Existed_Raid", 00:21:29.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.266 "strip_size_kb": 64, 00:21:29.266 "state": "configuring", 00:21:29.266 "raid_level": "concat", 00:21:29.266 "superblock": false, 00:21:29.266 "num_base_bdevs": 4, 00:21:29.266 "num_base_bdevs_discovered": 2, 00:21:29.266 "num_base_bdevs_operational": 4, 00:21:29.266 "base_bdevs_list": [ 00:21:29.266 { 00:21:29.266 "name": "BaseBdev1", 00:21:29.266 "uuid": "12bbd2cb-8f3e-498e-bb44-c522b9847a0a", 00:21:29.266 "is_configured": true, 00:21:29.266 "data_offset": 0, 00:21:29.267 "data_size": 65536 00:21:29.267 }, 00:21:29.267 { 00:21:29.267 "name": "BaseBdev2", 00:21:29.267 "uuid": "9f409d53-8c90-4794-b409-73035fd22573", 00:21:29.267 "is_configured": true, 00:21:29.267 "data_offset": 0, 00:21:29.267 "data_size": 65536 00:21:29.267 }, 00:21:29.267 { 00:21:29.267 "name": "BaseBdev3", 00:21:29.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.267 "is_configured": false, 00:21:29.267 "data_offset": 0, 00:21:29.267 "data_size": 0 00:21:29.267 }, 00:21:29.267 { 00:21:29.267 "name": "BaseBdev4", 00:21:29.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.267 "is_configured": false, 00:21:29.267 "data_offset": 0, 00:21:29.267 "data_size": 0 00:21:29.267 } 00:21:29.267 ] 00:21:29.267 }' 00:21:29.267 12:41:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.267 12:41:11 -- common/autotest_common.sh@10 -- # set +x 00:21:29.835 12:41:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:30.094 [2024-10-01 12:41:12.474041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.094 BaseBdev3 00:21:30.094 12:41:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:30.094 12:41:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:21:30.094 12:41:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:30.094 
12:41:12 -- common/autotest_common.sh@889 -- # local i 00:21:30.094 12:41:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:30.094 12:41:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:30.094 12:41:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:30.354 12:41:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:30.354 [ 00:21:30.354 { 00:21:30.354 "name": "BaseBdev3", 00:21:30.354 "aliases": [ 00:21:30.354 "b4589313-07b1-40de-afdb-75be67a7d5ea" 00:21:30.354 ], 00:21:30.354 "product_name": "Malloc disk", 00:21:30.354 "block_size": 512, 00:21:30.354 "num_blocks": 65536, 00:21:30.354 "uuid": "b4589313-07b1-40de-afdb-75be67a7d5ea", 00:21:30.354 "assigned_rate_limits": { 00:21:30.354 "rw_ios_per_sec": 0, 00:21:30.354 "rw_mbytes_per_sec": 0, 00:21:30.354 "r_mbytes_per_sec": 0, 00:21:30.354 "w_mbytes_per_sec": 0 00:21:30.354 }, 00:21:30.354 "claimed": true, 00:21:30.354 "claim_type": "exclusive_write", 00:21:30.354 "zoned": false, 00:21:30.354 "supported_io_types": { 00:21:30.354 "read": true, 00:21:30.354 "write": true, 00:21:30.354 "unmap": true, 00:21:30.354 "write_zeroes": true, 00:21:30.354 "flush": true, 00:21:30.354 "reset": true, 00:21:30.354 "compare": false, 00:21:30.354 "compare_and_write": false, 00:21:30.354 "abort": true, 00:21:30.354 "nvme_admin": false, 00:21:30.354 "nvme_io": false 00:21:30.354 }, 00:21:30.354 "memory_domains": [ 00:21:30.354 { 00:21:30.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.354 "dma_device_type": 2 00:21:30.354 } 00:21:30.354 ], 00:21:30.354 "driver_specific": {} 00:21:30.354 } 00:21:30.354 ] 00:21:30.354 12:41:12 -- common/autotest_common.sh@895 -- # return 0 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.354 12:41:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.614 12:41:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:30.614 "name": "Existed_Raid", 00:21:30.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.614 "strip_size_kb": 64, 00:21:30.614 "state": "configuring", 00:21:30.614 "raid_level": "concat", 00:21:30.614 "superblock": false, 00:21:30.614 "num_base_bdevs": 4, 00:21:30.614 "num_base_bdevs_discovered": 3, 00:21:30.614 "num_base_bdevs_operational": 4, 00:21:30.614 "base_bdevs_list": [ 00:21:30.614 { 00:21:30.614 "name": 
"BaseBdev1", 00:21:30.614 "uuid": "12bbd2cb-8f3e-498e-bb44-c522b9847a0a", 00:21:30.614 "is_configured": true, 00:21:30.614 "data_offset": 0, 00:21:30.614 "data_size": 65536 00:21:30.614 }, 00:21:30.614 { 00:21:30.614 "name": "BaseBdev2", 00:21:30.614 "uuid": "9f409d53-8c90-4794-b409-73035fd22573", 00:21:30.614 "is_configured": true, 00:21:30.614 "data_offset": 0, 00:21:30.614 "data_size": 65536 00:21:30.614 }, 00:21:30.614 { 00:21:30.614 "name": "BaseBdev3", 00:21:30.614 "uuid": "b4589313-07b1-40de-afdb-75be67a7d5ea", 00:21:30.614 "is_configured": true, 00:21:30.614 "data_offset": 0, 00:21:30.614 "data_size": 65536 00:21:30.614 }, 00:21:30.614 { 00:21:30.614 "name": "BaseBdev4", 00:21:30.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.614 "is_configured": false, 00:21:30.614 "data_offset": 0, 00:21:30.614 "data_size": 0 00:21:30.614 } 00:21:30.614 ] 00:21:30.614 }' 00:21:30.614 12:41:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:30.614 12:41:13 -- common/autotest_common.sh@10 -- # set +x 00:21:31.183 12:41:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:31.442 [2024-10-01 12:41:13.746349] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:31.442 [2024-10-01 12:41:13.746393] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:21:31.442 [2024-10-01 12:41:13.746400] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:31.442 [2024-10-01 12:41:13.746523] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:31.442 [2024-10-01 12:41:13.746830] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:21:31.443 [2024-10-01 12:41:13.746840] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:21:31.443 [2024-10-01 12:41:13.747048] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.443 BaseBdev4 00:21:31.443 12:41:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:31.443 12:41:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:21:31.443 12:41:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:31.443 12:41:13 -- common/autotest_common.sh@889 -- # local i 00:21:31.443 12:41:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:31.443 12:41:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:31.443 12:41:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:31.443 12:41:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:31.702 [ 00:21:31.702 { 00:21:31.702 "name": "BaseBdev4", 00:21:31.702 "aliases": [ 00:21:31.702 "b17f74e1-7493-49c5-bda9-9f729197d69a" 00:21:31.702 ], 00:21:31.702 "product_name": "Malloc disk", 00:21:31.702 "block_size": 512, 00:21:31.702 "num_blocks": 65536, 00:21:31.702 "uuid": "b17f74e1-7493-49c5-bda9-9f729197d69a", 00:21:31.702 "assigned_rate_limits": { 00:21:31.702 "rw_ios_per_sec": 0, 00:21:31.702 "rw_mbytes_per_sec": 0, 00:21:31.702 "r_mbytes_per_sec": 0, 00:21:31.702 "w_mbytes_per_sec": 0 00:21:31.702 }, 00:21:31.702 "claimed": true, 00:21:31.702 "claim_type": "exclusive_write", 00:21:31.702 "zoned": false, 00:21:31.702 
"supported_io_types": { 00:21:31.702 "read": true, 00:21:31.702 "write": true, 00:21:31.702 "unmap": true, 00:21:31.702 "write_zeroes": true, 00:21:31.702 "flush": true, 00:21:31.702 "reset": true, 00:21:31.702 "compare": false, 00:21:31.702 "compare_and_write": false, 00:21:31.702 "abort": true, 00:21:31.702 "nvme_admin": false, 00:21:31.702 "nvme_io": false 00:21:31.702 }, 00:21:31.702 "memory_domains": [ 00:21:31.702 { 00:21:31.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.702 "dma_device_type": 2 00:21:31.702 } 00:21:31.702 ], 00:21:31.702 "driver_specific": {} 00:21:31.702 } 00:21:31.702 ] 00:21:31.702 12:41:14 -- common/autotest_common.sh@895 -- # return 0 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.702 12:41:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.962 12:41:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.962 "name": "Existed_Raid", 00:21:31.962 "uuid": "9186ed5b-f3f1-4280-9947-2a2d14462c97", 00:21:31.962 "strip_size_kb": 64, 00:21:31.962 "state": "online", 00:21:31.962 "raid_level": "concat", 00:21:31.962 "superblock": false, 00:21:31.962 "num_base_bdevs": 4, 00:21:31.962 "num_base_bdevs_discovered": 4, 00:21:31.962 "num_base_bdevs_operational": 4, 00:21:31.962 "base_bdevs_list": [ 00:21:31.962 { 00:21:31.962 "name": "BaseBdev1", 00:21:31.962 "uuid": "12bbd2cb-8f3e-498e-bb44-c522b9847a0a", 00:21:31.962 "is_configured": true, 00:21:31.962 "data_offset": 0, 00:21:31.962 "data_size": 65536 00:21:31.962 }, 00:21:31.962 { 00:21:31.962 "name": "BaseBdev2", 00:21:31.962 "uuid": "9f409d53-8c90-4794-b409-73035fd22573", 00:21:31.962 "is_configured": true, 00:21:31.962 "data_offset": 0, 00:21:31.962 "data_size": 65536 00:21:31.962 }, 00:21:31.962 { 00:21:31.962 "name": "BaseBdev3", 00:21:31.962 "uuid": "b4589313-07b1-40de-afdb-75be67a7d5ea", 00:21:31.962 "is_configured": true, 00:21:31.962 "data_offset": 0, 00:21:31.962 "data_size": 65536 00:21:31.962 }, 00:21:31.962 { 00:21:31.962 "name": "BaseBdev4", 00:21:31.962 "uuid": "b17f74e1-7493-49c5-bda9-9f729197d69a", 00:21:31.962 "is_configured": true, 00:21:31.962 "data_offset": 0, 00:21:31.962 "data_size": 65536 00:21:31.962 } 00:21:31.962 ] 00:21:31.962 }' 00:21:31.962 12:41:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.962 12:41:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.531 12:41:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:21:32.531 [2024-10-01 12:41:15.000617] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:32.531 [2024-10-01 12:41:15.000650] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:32.531 [2024-10-01 12:41:15.000714] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:32.789 "name": "Existed_Raid", 00:21:32.789 "uuid": "9186ed5b-f3f1-4280-9947-2a2d14462c97", 00:21:32.789 "strip_size_kb": 64, 00:21:32.789 "state": "offline", 00:21:32.789 "raid_level": "concat", 00:21:32.789 "superblock": false, 00:21:32.789 "num_base_bdevs": 4, 00:21:32.789 "num_base_bdevs_discovered": 3, 00:21:32.789 "num_base_bdevs_operational": 3, 00:21:32.789 "base_bdevs_list": [ 00:21:32.789 { 00:21:32.789 "name": null, 00:21:32.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.789 "is_configured": false, 00:21:32.789 "data_offset": 0, 00:21:32.789 "data_size": 65536 00:21:32.789 }, 00:21:32.789 { 00:21:32.789 "name": "BaseBdev2", 00:21:32.789 "uuid": "9f409d53-8c90-4794-b409-73035fd22573", 00:21:32.789 "is_configured": true, 00:21:32.789 "data_offset": 0, 00:21:32.789 "data_size": 65536 00:21:32.789 }, 00:21:32.789 { 00:21:32.789 "name": "BaseBdev3", 00:21:32.789 "uuid": "b4589313-07b1-40de-afdb-75be67a7d5ea", 00:21:32.789 "is_configured": true, 00:21:32.789 "data_offset": 0, 00:21:32.789 "data_size": 65536 00:21:32.789 }, 00:21:32.789 { 00:21:32.789 "name": "BaseBdev4", 00:21:32.789 "uuid": "b17f74e1-7493-49c5-bda9-9f729197d69a", 00:21:32.789 "is_configured": true, 00:21:32.789 "data_offset": 0, 00:21:32.789 "data_size": 65536 00:21:32.789 } 00:21:32.789 ] 00:21:32.789 }' 00:21:32.789 12:41:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:32.789 12:41:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.356 12:41:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:33.356 12:41:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:33.356 12:41:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:33.356 12:41:15 -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.614 12:41:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:33.614 12:41:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:33.614 12:41:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:33.614 [2024-10-01 12:41:16.085686] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:33.872 12:41:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:33.872 12:41:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:33.872 12:41:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.872 12:41:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:33.872 12:41:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:33.872 12:41:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:33.872 12:41:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:34.131 [2024-10-01 12:41:16.531799] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:34.131 12:41:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:34.131 12:41:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:34.132 12:41:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.132 12:41:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:34.390 12:41:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:34.390 12:41:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:34.390 12:41:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:34.650 [2024-10-01 12:41:16.979565] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:34.650 [2024-10-01 12:41:16.979611] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:21:34.650 12:41:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:34.650 12:41:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:34.650 12:41:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.650 12:41:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:34.909 12:41:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:34.909 12:41:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:34.909 12:41:17 -- bdev/bdev_raid.sh@287 -- # killprocess 120106 00:21:34.909 12:41:17 -- common/autotest_common.sh@926 -- # '[' -z 120106 ']' 00:21:34.909 12:41:17 -- common/autotest_common.sh@930 -- # kill -0 120106 00:21:34.909 12:41:17 -- common/autotest_common.sh@931 -- # uname 00:21:34.909 12:41:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:34.909 12:41:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120106 00:21:34.909 12:41:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:34.909 killing process with pid 120106 00:21:34.909 12:41:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:34.909 12:41:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120106' 00:21:34.909 12:41:17 -- 
common/autotest_common.sh@945 -- # kill 120106 00:21:34.909 [2024-10-01 12:41:17.291716] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.909 12:41:17 -- common/autotest_common.sh@950 -- # wait 120106 00:21:34.909 [2024-10-01 12:41:17.291868] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.847 12:41:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:35.847 00:21:35.847 real 0m11.879s 00:21:35.847 user 0m20.327s 00:21:35.847 sys 0m1.912s 00:21:35.847 12:41:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.847 12:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:35.847 ************************************ 00:21:35.847 END TEST raid_state_function_test 00:21:35.847 ************************************ 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:36.107 12:41:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:36.107 12:41:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:36.107 12:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:36.107 ************************************ 00:21:36.107 START TEST raid_state_function_test_sb 00:21:36.107 ************************************ 00:21:36.107 12:41:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@226 
-- # raid_pid=120520 00:21:36.107 Process raid pid: 120520 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120520' 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:36.107 12:41:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120520 /var/tmp/spdk-raid.sock 00:21:36.107 12:41:18 -- common/autotest_common.sh@819 -- # '[' -z 120520 ']' 00:21:36.107 12:41:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:36.107 12:41:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:36.107 12:41:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:36.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:36.107 12:41:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:36.107 12:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:36.107 [2024-10-01 12:41:18.524267] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:36.107 [2024-10-01 12:41:18.524449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.366 [2024-10-01 12:41:18.696060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.366 [2024-10-01 12:41:18.852563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.627 [2024-10-01 12:41:19.001985] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.886 12:41:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:36.886 12:41:19 -- common/autotest_common.sh@852 -- # return 0 00:21:36.886 12:41:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:37.145 [2024-10-01 12:41:19.471201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.145 [2024-10-01 12:41:19.471269] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.145 [2024-10-01 12:41:19.471279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.145 [2024-10-01 12:41:19.471298] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.145 [2024-10-01 12:41:19.471304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.145 [2024-10-01 12:41:19.471341] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.145 [2024-10-01 12:41:19.471347] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.145 [2024-10-01 12:41:19.471367] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 
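The -s flag on the bdev_raid_create call above is the superblock_create_arg selected when superblock=true, and it is the functional difference between this _sb variant and the plain raid_state_function_test run earlier. With a superblock, each member gives up a slice of its blocks to on-disk metadata, which is why the bdev dumps later in this trace report different offsets than the earlier run:

  # With -s, each 65536-block member reserves 2048 blocks for the superblock:
  #   data_offset = 2048
  #   data_size   = 65536 - 2048 = 63488
  # versus data_offset=0, data_size=65536 in the non-superblock run above.
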
00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.145 12:41:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.146 12:41:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:37.146 "name": "Existed_Raid", 00:21:37.146 "uuid": "4ec3b194-1ff0-4982-bca4-8b845410f285", 00:21:37.146 "strip_size_kb": 64, 00:21:37.146 "state": "configuring", 00:21:37.146 "raid_level": "concat", 00:21:37.146 "superblock": true, 00:21:37.146 "num_base_bdevs": 4, 00:21:37.146 "num_base_bdevs_discovered": 0, 00:21:37.146 "num_base_bdevs_operational": 4, 00:21:37.146 "base_bdevs_list": [ 00:21:37.146 { 00:21:37.146 "name": "BaseBdev1", 00:21:37.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.146 "is_configured": false, 00:21:37.146 "data_offset": 0, 00:21:37.146 "data_size": 0 00:21:37.146 }, 00:21:37.146 { 00:21:37.146 "name": "BaseBdev2", 00:21:37.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.146 "is_configured": false, 00:21:37.146 "data_offset": 0, 00:21:37.146 "data_size": 0 00:21:37.146 }, 00:21:37.146 { 00:21:37.146 "name": "BaseBdev3", 00:21:37.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.146 "is_configured": false, 00:21:37.146 "data_offset": 0, 00:21:37.146 "data_size": 0 00:21:37.146 }, 00:21:37.146 { 00:21:37.146 "name": "BaseBdev4", 00:21:37.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.146 "is_configured": false, 00:21:37.146 "data_offset": 0, 00:21:37.146 "data_size": 0 00:21:37.146 } 00:21:37.146 ] 00:21:37.146 }' 00:21:37.146 12:41:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:37.146 12:41:19 -- common/autotest_common.sh@10 -- # set +x 00:21:37.713 12:41:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:37.971 [2024-10-01 12:41:20.369747] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.971 [2024-10-01 12:41:20.369785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:21:37.972 12:41:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:38.230 [2024-10-01 12:41:20.541529] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:38.230 [2024-10-01 12:41:20.541582] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:38.230 [2024-10-01 12:41:20.541590] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.230 [2024-10-01 12:41:20.541612] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.230 [2024-10-01 12:41:20.541618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.230 [2024-10-01 
12:41:20.541650] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.230 [2024-10-01 12:41:20.541655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:38.230 [2024-10-01 12:41:20.541675] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:38.230 12:41:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:38.230 [2024-10-01 12:41:20.743700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.230 BaseBdev1 00:21:38.230 12:41:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:38.230 12:41:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:38.230 12:41:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:38.230 12:41:20 -- common/autotest_common.sh@889 -- # local i 00:21:38.230 12:41:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:38.230 12:41:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:38.230 12:41:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.489 12:41:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:38.749 [ 00:21:38.749 { 00:21:38.749 "name": "BaseBdev1", 00:21:38.749 "aliases": [ 00:21:38.749 "6d160837-5fbc-4162-b3cc-38048a2999cb" 00:21:38.749 ], 00:21:38.749 "product_name": "Malloc disk", 00:21:38.749 "block_size": 512, 00:21:38.749 "num_blocks": 65536, 00:21:38.749 "uuid": "6d160837-5fbc-4162-b3cc-38048a2999cb", 00:21:38.749 "assigned_rate_limits": { 00:21:38.749 "rw_ios_per_sec": 0, 00:21:38.750 "rw_mbytes_per_sec": 0, 00:21:38.750 "r_mbytes_per_sec": 0, 00:21:38.750 "w_mbytes_per_sec": 0 00:21:38.750 }, 00:21:38.750 "claimed": true, 00:21:38.750 "claim_type": "exclusive_write", 00:21:38.750 "zoned": false, 00:21:38.750 "supported_io_types": { 00:21:38.750 "read": true, 00:21:38.750 "write": true, 00:21:38.750 "unmap": true, 00:21:38.750 "write_zeroes": true, 00:21:38.750 "flush": true, 00:21:38.750 "reset": true, 00:21:38.750 "compare": false, 00:21:38.750 "compare_and_write": false, 00:21:38.750 "abort": true, 00:21:38.750 "nvme_admin": false, 00:21:38.750 "nvme_io": false 00:21:38.750 }, 00:21:38.750 "memory_domains": [ 00:21:38.750 { 00:21:38.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.750 "dma_device_type": 2 00:21:38.750 } 00:21:38.750 ], 00:21:38.750 "driver_specific": {} 00:21:38.750 } 00:21:38.750 ] 00:21:38.750 12:41:21 -- common/autotest_common.sh@895 -- # return 0 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.750 12:41:21 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.750 "name": "Existed_Raid", 00:21:38.750 "uuid": "328ece0f-0948-421a-af0b-8bb8d2235350", 00:21:38.750 "strip_size_kb": 64, 00:21:38.750 "state": "configuring", 00:21:38.750 "raid_level": "concat", 00:21:38.750 "superblock": true, 00:21:38.750 "num_base_bdevs": 4, 00:21:38.750 "num_base_bdevs_discovered": 1, 00:21:38.750 "num_base_bdevs_operational": 4, 00:21:38.750 "base_bdevs_list": [ 00:21:38.750 { 00:21:38.750 "name": "BaseBdev1", 00:21:38.750 "uuid": "6d160837-5fbc-4162-b3cc-38048a2999cb", 00:21:38.750 "is_configured": true, 00:21:38.750 "data_offset": 2048, 00:21:38.750 "data_size": 63488 00:21:38.750 }, 00:21:38.750 { 00:21:38.750 "name": "BaseBdev2", 00:21:38.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.750 "is_configured": false, 00:21:38.750 "data_offset": 0, 00:21:38.750 "data_size": 0 00:21:38.750 }, 00:21:38.750 { 00:21:38.750 "name": "BaseBdev3", 00:21:38.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.750 "is_configured": false, 00:21:38.750 "data_offset": 0, 00:21:38.750 "data_size": 0 00:21:38.750 }, 00:21:38.750 { 00:21:38.750 "name": "BaseBdev4", 00:21:38.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.750 "is_configured": false, 00:21:38.750 "data_offset": 0, 00:21:38.750 "data_size": 0 00:21:38.750 } 00:21:38.750 ] 00:21:38.750 }' 00:21:38.750 12:41:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.750 12:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:39.319 12:41:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:39.578 [2024-10-01 12:41:21.969895] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:39.578 [2024-10-01 12:41:21.969976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:39.578 12:41:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:39.578 12:41:21 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:39.837 12:41:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:40.096 BaseBdev1 00:21:40.096 12:41:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:40.096 12:41:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:40.096 12:41:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:40.096 12:41:22 -- common/autotest_common.sh@889 -- # local i 00:21:40.096 12:41:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:40.096 12:41:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:40.096 12:41:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.355 12:41:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:40.355 [ 00:21:40.355 { 00:21:40.355 "name": "BaseBdev1", 00:21:40.355 "aliases": [ 00:21:40.355 
"45e0e910-7fee-4421-a2e7-01f357e2402f" 00:21:40.355 ], 00:21:40.355 "product_name": "Malloc disk", 00:21:40.355 "block_size": 512, 00:21:40.355 "num_blocks": 65536, 00:21:40.356 "uuid": "45e0e910-7fee-4421-a2e7-01f357e2402f", 00:21:40.356 "assigned_rate_limits": { 00:21:40.356 "rw_ios_per_sec": 0, 00:21:40.356 "rw_mbytes_per_sec": 0, 00:21:40.356 "r_mbytes_per_sec": 0, 00:21:40.356 "w_mbytes_per_sec": 0 00:21:40.356 }, 00:21:40.356 "claimed": false, 00:21:40.356 "zoned": false, 00:21:40.356 "supported_io_types": { 00:21:40.356 "read": true, 00:21:40.356 "write": true, 00:21:40.356 "unmap": true, 00:21:40.356 "write_zeroes": true, 00:21:40.356 "flush": true, 00:21:40.356 "reset": true, 00:21:40.356 "compare": false, 00:21:40.356 "compare_and_write": false, 00:21:40.356 "abort": true, 00:21:40.356 "nvme_admin": false, 00:21:40.356 "nvme_io": false 00:21:40.356 }, 00:21:40.356 "memory_domains": [ 00:21:40.356 { 00:21:40.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.356 "dma_device_type": 2 00:21:40.356 } 00:21:40.356 ], 00:21:40.356 "driver_specific": {} 00:21:40.356 } 00:21:40.356 ] 00:21:40.356 12:41:22 -- common/autotest_common.sh@895 -- # return 0 00:21:40.356 12:41:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:40.615 [2024-10-01 12:41:22.984071] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.615 [2024-10-01 12:41:22.986053] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:40.615 [2024-10-01 12:41:22.986147] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:40.615 [2024-10-01 12:41:22.986159] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:40.615 [2024-10-01 12:41:22.986185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:40.615 [2024-10-01 12:41:22.986193] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:40.615 [2024-10-01 12:41:22.986212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.615 12:41:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.876 12:41:23 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:21:40.876 "name": "Existed_Raid", 00:21:40.876 "uuid": "cc8433ad-6d92-4c26-8441-19f4fe25b47f", 00:21:40.876 "strip_size_kb": 64, 00:21:40.876 "state": "configuring", 00:21:40.876 "raid_level": "concat", 00:21:40.876 "superblock": true, 00:21:40.876 "num_base_bdevs": 4, 00:21:40.876 "num_base_bdevs_discovered": 1, 00:21:40.876 "num_base_bdevs_operational": 4, 00:21:40.876 "base_bdevs_list": [ 00:21:40.876 { 00:21:40.876 "name": "BaseBdev1", 00:21:40.876 "uuid": "45e0e910-7fee-4421-a2e7-01f357e2402f", 00:21:40.876 "is_configured": true, 00:21:40.876 "data_offset": 2048, 00:21:40.876 "data_size": 63488 00:21:40.876 }, 00:21:40.876 { 00:21:40.876 "name": "BaseBdev2", 00:21:40.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.876 "is_configured": false, 00:21:40.876 "data_offset": 0, 00:21:40.876 "data_size": 0 00:21:40.876 }, 00:21:40.876 { 00:21:40.876 "name": "BaseBdev3", 00:21:40.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.876 "is_configured": false, 00:21:40.876 "data_offset": 0, 00:21:40.876 "data_size": 0 00:21:40.876 }, 00:21:40.876 { 00:21:40.876 "name": "BaseBdev4", 00:21:40.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.876 "is_configured": false, 00:21:40.876 "data_offset": 0, 00:21:40.876 "data_size": 0 00:21:40.876 } 00:21:40.876 ] 00:21:40.876 }' 00:21:40.876 12:41:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.876 12:41:23 -- common/autotest_common.sh@10 -- # set +x 00:21:41.136 12:41:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:41.395 [2024-10-01 12:41:23.817367] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.395 BaseBdev2 00:21:41.395 12:41:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:41.395 12:41:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:21:41.395 12:41:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:41.395 12:41:23 -- common/autotest_common.sh@889 -- # local i 00:21:41.395 12:41:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:41.395 12:41:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:41.395 12:41:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:41.654 12:41:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:41.654 [ 00:21:41.654 { 00:21:41.654 "name": "BaseBdev2", 00:21:41.654 "aliases": [ 00:21:41.654 "1868abbb-78b7-4cfa-bc42-d117d4cda61d" 00:21:41.654 ], 00:21:41.654 "product_name": "Malloc disk", 00:21:41.654 "block_size": 512, 00:21:41.654 "num_blocks": 65536, 00:21:41.654 "uuid": "1868abbb-78b7-4cfa-bc42-d117d4cda61d", 00:21:41.654 "assigned_rate_limits": { 00:21:41.654 "rw_ios_per_sec": 0, 00:21:41.654 "rw_mbytes_per_sec": 0, 00:21:41.654 "r_mbytes_per_sec": 0, 00:21:41.654 "w_mbytes_per_sec": 0 00:21:41.654 }, 00:21:41.654 "claimed": true, 00:21:41.654 "claim_type": "exclusive_write", 00:21:41.654 "zoned": false, 00:21:41.654 "supported_io_types": { 00:21:41.654 "read": true, 00:21:41.655 "write": true, 00:21:41.655 "unmap": true, 00:21:41.655 "write_zeroes": true, 00:21:41.655 "flush": true, 00:21:41.655 "reset": true, 00:21:41.655 "compare": false, 00:21:41.655 "compare_and_write": false, 00:21:41.655 "abort": true, 00:21:41.655 "nvme_admin": false, 00:21:41.655 
"nvme_io": false 00:21:41.655 }, 00:21:41.655 "memory_domains": [ 00:21:41.655 { 00:21:41.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.655 "dma_device_type": 2 00:21:41.655 } 00:21:41.655 ], 00:21:41.655 "driver_specific": {} 00:21:41.655 } 00:21:41.655 ] 00:21:41.655 12:41:24 -- common/autotest_common.sh@895 -- # return 0 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.655 12:41:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.914 12:41:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.914 "name": "Existed_Raid", 00:21:41.914 "uuid": "cc8433ad-6d92-4c26-8441-19f4fe25b47f", 00:21:41.914 "strip_size_kb": 64, 00:21:41.914 "state": "configuring", 00:21:41.914 "raid_level": "concat", 00:21:41.914 "superblock": true, 00:21:41.914 "num_base_bdevs": 4, 00:21:41.914 "num_base_bdevs_discovered": 2, 00:21:41.914 "num_base_bdevs_operational": 4, 00:21:41.914 "base_bdevs_list": [ 00:21:41.914 { 00:21:41.914 "name": "BaseBdev1", 00:21:41.914 "uuid": "45e0e910-7fee-4421-a2e7-01f357e2402f", 00:21:41.914 "is_configured": true, 00:21:41.914 "data_offset": 2048, 00:21:41.914 "data_size": 63488 00:21:41.914 }, 00:21:41.914 { 00:21:41.914 "name": "BaseBdev2", 00:21:41.914 "uuid": "1868abbb-78b7-4cfa-bc42-d117d4cda61d", 00:21:41.914 "is_configured": true, 00:21:41.914 "data_offset": 2048, 00:21:41.914 "data_size": 63488 00:21:41.914 }, 00:21:41.914 { 00:21:41.914 "name": "BaseBdev3", 00:21:41.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.914 "is_configured": false, 00:21:41.914 "data_offset": 0, 00:21:41.914 "data_size": 0 00:21:41.914 }, 00:21:41.914 { 00:21:41.914 "name": "BaseBdev4", 00:21:41.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.914 "is_configured": false, 00:21:41.914 "data_offset": 0, 00:21:41.914 "data_size": 0 00:21:41.914 } 00:21:41.914 ] 00:21:41.914 }' 00:21:41.914 12:41:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.914 12:41:24 -- common/autotest_common.sh@10 -- # set +x 00:21:42.480 12:41:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:42.738 [2024-10-01 12:41:25.050489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:42.738 BaseBdev3 00:21:42.738 12:41:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:42.738 12:41:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:21:42.738 12:41:25 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:42.738 12:41:25 -- common/autotest_common.sh@889 -- # local i 00:21:42.738 12:41:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:42.738 12:41:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:42.738 12:41:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.738 12:41:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:42.996 [ 00:21:42.996 { 00:21:42.996 "name": "BaseBdev3", 00:21:42.996 "aliases": [ 00:21:42.996 "3d3c471d-97dd-45ce-9aef-1668cf3acb56" 00:21:42.996 ], 00:21:42.996 "product_name": "Malloc disk", 00:21:42.996 "block_size": 512, 00:21:42.996 "num_blocks": 65536, 00:21:42.997 "uuid": "3d3c471d-97dd-45ce-9aef-1668cf3acb56", 00:21:42.997 "assigned_rate_limits": { 00:21:42.997 "rw_ios_per_sec": 0, 00:21:42.997 "rw_mbytes_per_sec": 0, 00:21:42.997 "r_mbytes_per_sec": 0, 00:21:42.997 "w_mbytes_per_sec": 0 00:21:42.997 }, 00:21:42.997 "claimed": true, 00:21:42.997 "claim_type": "exclusive_write", 00:21:42.997 "zoned": false, 00:21:42.997 "supported_io_types": { 00:21:42.997 "read": true, 00:21:42.997 "write": true, 00:21:42.997 "unmap": true, 00:21:42.997 "write_zeroes": true, 00:21:42.997 "flush": true, 00:21:42.997 "reset": true, 00:21:42.997 "compare": false, 00:21:42.997 "compare_and_write": false, 00:21:42.997 "abort": true, 00:21:42.997 "nvme_admin": false, 00:21:42.997 "nvme_io": false 00:21:42.997 }, 00:21:42.997 "memory_domains": [ 00:21:42.997 { 00:21:42.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.997 "dma_device_type": 2 00:21:42.997 } 00:21:42.997 ], 00:21:42.997 "driver_specific": {} 00:21:42.997 } 00:21:42.997 ] 00:21:42.997 12:41:25 -- common/autotest_common.sh@895 -- # return 0 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.997 12:41:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.255 12:41:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.255 "name": "Existed_Raid", 00:21:43.255 "uuid": "cc8433ad-6d92-4c26-8441-19f4fe25b47f", 00:21:43.255 "strip_size_kb": 64, 00:21:43.255 "state": "configuring", 00:21:43.255 "raid_level": "concat", 00:21:43.255 "superblock": true, 00:21:43.255 "num_base_bdevs": 4, 00:21:43.255 "num_base_bdevs_discovered": 3, 00:21:43.255 "num_base_bdevs_operational": 4, 
00:21:43.255 "base_bdevs_list": [ 00:21:43.255 { 00:21:43.255 "name": "BaseBdev1", 00:21:43.255 "uuid": "45e0e910-7fee-4421-a2e7-01f357e2402f", 00:21:43.255 "is_configured": true, 00:21:43.255 "data_offset": 2048, 00:21:43.255 "data_size": 63488 00:21:43.255 }, 00:21:43.255 { 00:21:43.255 "name": "BaseBdev2", 00:21:43.255 "uuid": "1868abbb-78b7-4cfa-bc42-d117d4cda61d", 00:21:43.255 "is_configured": true, 00:21:43.255 "data_offset": 2048, 00:21:43.255 "data_size": 63488 00:21:43.255 }, 00:21:43.255 { 00:21:43.255 "name": "BaseBdev3", 00:21:43.255 "uuid": "3d3c471d-97dd-45ce-9aef-1668cf3acb56", 00:21:43.255 "is_configured": true, 00:21:43.255 "data_offset": 2048, 00:21:43.255 "data_size": 63488 00:21:43.255 }, 00:21:43.255 { 00:21:43.255 "name": "BaseBdev4", 00:21:43.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.255 "is_configured": false, 00:21:43.255 "data_offset": 0, 00:21:43.255 "data_size": 0 00:21:43.255 } 00:21:43.255 ] 00:21:43.255 }' 00:21:43.255 12:41:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.255 12:41:25 -- common/autotest_common.sh@10 -- # set +x 00:21:43.819 12:41:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:43.819 [2024-10-01 12:41:26.265857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:43.819 [2024-10-01 12:41:26.266273] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:21:43.819 [2024-10-01 12:41:26.266385] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:43.819 [2024-10-01 12:41:26.266532] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:43.819 [2024-10-01 12:41:26.266857] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:21:43.819 [2024-10-01 12:41:26.266965] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:21:43.819 [2024-10-01 12:41:26.267183] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.819 BaseBdev4 00:21:43.819 12:41:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:43.819 12:41:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:21:43.819 12:41:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:43.819 12:41:26 -- common/autotest_common.sh@889 -- # local i 00:21:43.819 12:41:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:43.819 12:41:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:43.819 12:41:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:44.078 12:41:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:44.078 [ 00:21:44.078 { 00:21:44.078 "name": "BaseBdev4", 00:21:44.078 "aliases": [ 00:21:44.078 "07347383-b8c6-46ee-bfdd-ff3501f09303" 00:21:44.078 ], 00:21:44.078 "product_name": "Malloc disk", 00:21:44.078 "block_size": 512, 00:21:44.078 "num_blocks": 65536, 00:21:44.078 "uuid": "07347383-b8c6-46ee-bfdd-ff3501f09303", 00:21:44.078 "assigned_rate_limits": { 00:21:44.078 "rw_ios_per_sec": 0, 00:21:44.078 "rw_mbytes_per_sec": 0, 00:21:44.078 "r_mbytes_per_sec": 0, 00:21:44.078 "w_mbytes_per_sec": 0 00:21:44.078 }, 00:21:44.078 "claimed": true, 00:21:44.078 "claim_type": 
"exclusive_write", 00:21:44.078 "zoned": false, 00:21:44.078 "supported_io_types": { 00:21:44.078 "read": true, 00:21:44.078 "write": true, 00:21:44.078 "unmap": true, 00:21:44.078 "write_zeroes": true, 00:21:44.078 "flush": true, 00:21:44.078 "reset": true, 00:21:44.078 "compare": false, 00:21:44.078 "compare_and_write": false, 00:21:44.078 "abort": true, 00:21:44.078 "nvme_admin": false, 00:21:44.078 "nvme_io": false 00:21:44.078 }, 00:21:44.078 "memory_domains": [ 00:21:44.078 { 00:21:44.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.078 "dma_device_type": 2 00:21:44.078 } 00:21:44.078 ], 00:21:44.078 "driver_specific": {} 00:21:44.078 } 00:21:44.078 ] 00:21:44.336 12:41:26 -- common/autotest_common.sh@895 -- # return 0 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:44.336 "name": "Existed_Raid", 00:21:44.336 "uuid": "cc8433ad-6d92-4c26-8441-19f4fe25b47f", 00:21:44.336 "strip_size_kb": 64, 00:21:44.336 "state": "online", 00:21:44.336 "raid_level": "concat", 00:21:44.336 "superblock": true, 00:21:44.336 "num_base_bdevs": 4, 00:21:44.336 "num_base_bdevs_discovered": 4, 00:21:44.336 "num_base_bdevs_operational": 4, 00:21:44.336 "base_bdevs_list": [ 00:21:44.336 { 00:21:44.336 "name": "BaseBdev1", 00:21:44.336 "uuid": "45e0e910-7fee-4421-a2e7-01f357e2402f", 00:21:44.336 "is_configured": true, 00:21:44.336 "data_offset": 2048, 00:21:44.336 "data_size": 63488 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "name": "BaseBdev2", 00:21:44.336 "uuid": "1868abbb-78b7-4cfa-bc42-d117d4cda61d", 00:21:44.336 "is_configured": true, 00:21:44.336 "data_offset": 2048, 00:21:44.336 "data_size": 63488 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "name": "BaseBdev3", 00:21:44.336 "uuid": "3d3c471d-97dd-45ce-9aef-1668cf3acb56", 00:21:44.336 "is_configured": true, 00:21:44.336 "data_offset": 2048, 00:21:44.336 "data_size": 63488 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "name": "BaseBdev4", 00:21:44.336 "uuid": "07347383-b8c6-46ee-bfdd-ff3501f09303", 00:21:44.336 "is_configured": true, 00:21:44.336 "data_offset": 2048, 00:21:44.336 "data_size": 63488 00:21:44.336 } 00:21:44.336 ] 00:21:44.336 }' 00:21:44.336 12:41:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:44.336 12:41:26 -- common/autotest_common.sh@10 -- # set +x 00:21:44.902 12:41:27 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:45.161 [2024-10-01 12:41:27.516071] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:45.161 [2024-10-01 12:41:27.516192] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.161 [2024-10-01 12:41:27.516395] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.161 12:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.420 12:41:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.420 "name": "Existed_Raid", 00:21:45.420 "uuid": "cc8433ad-6d92-4c26-8441-19f4fe25b47f", 00:21:45.420 "strip_size_kb": 64, 00:21:45.420 "state": "offline", 00:21:45.420 "raid_level": "concat", 00:21:45.420 "superblock": true, 00:21:45.420 "num_base_bdevs": 4, 00:21:45.420 "num_base_bdevs_discovered": 3, 00:21:45.420 "num_base_bdevs_operational": 3, 00:21:45.420 "base_bdevs_list": [ 00:21:45.420 { 00:21:45.420 "name": null, 00:21:45.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.420 "is_configured": false, 00:21:45.420 "data_offset": 2048, 00:21:45.420 "data_size": 63488 00:21:45.420 }, 00:21:45.420 { 00:21:45.420 "name": "BaseBdev2", 00:21:45.420 "uuid": "1868abbb-78b7-4cfa-bc42-d117d4cda61d", 00:21:45.420 "is_configured": true, 00:21:45.420 "data_offset": 2048, 00:21:45.420 "data_size": 63488 00:21:45.420 }, 00:21:45.420 { 00:21:45.420 "name": "BaseBdev3", 00:21:45.420 "uuid": "3d3c471d-97dd-45ce-9aef-1668cf3acb56", 00:21:45.420 "is_configured": true, 00:21:45.420 "data_offset": 2048, 00:21:45.420 "data_size": 63488 00:21:45.420 }, 00:21:45.420 { 00:21:45.420 "name": "BaseBdev4", 00:21:45.420 "uuid": "07347383-b8c6-46ee-bfdd-ff3501f09303", 00:21:45.420 "is_configured": true, 00:21:45.420 "data_offset": 2048, 00:21:45.420 "data_size": 63488 00:21:45.420 } 00:21:45.420 ] 00:21:45.420 }' 00:21:45.420 12:41:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.420 12:41:27 -- common/autotest_common.sh@10 -- # set +x 00:21:46.020 12:41:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:46.020 12:41:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:46.020 12:41:28 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.020 12:41:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:46.020 12:41:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:46.020 12:41:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:46.020 12:41:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:46.296 [2024-10-01 12:41:28.634087] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:46.296 12:41:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:46.296 12:41:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:46.296 12:41:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.296 12:41:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:46.554 12:41:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:46.554 12:41:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:46.555 12:41:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:46.555 [2024-10-01 12:41:29.074170] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:46.812 12:41:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:46.812 12:41:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:46.812 12:41:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.812 12:41:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:47.070 12:41:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:47.070 12:41:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:47.070 12:41:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:47.070 [2024-10-01 12:41:29.529853] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:47.070 [2024-10-01 12:41:29.530017] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:21:47.329 12:41:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:47.329 12:41:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:47.329 12:41:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.329 12:41:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:47.329 12:41:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:47.330 12:41:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:47.330 12:41:29 -- bdev/bdev_raid.sh@287 -- # killprocess 120520 00:21:47.330 12:41:29 -- common/autotest_common.sh@926 -- # '[' -z 120520 ']' 00:21:47.330 12:41:29 -- common/autotest_common.sh@930 -- # kill -0 120520 00:21:47.330 12:41:29 -- common/autotest_common.sh@931 -- # uname 00:21:47.330 12:41:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:47.330 12:41:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120520 00:21:47.330 12:41:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:47.330 12:41:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:47.330 12:41:29 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 120520' 00:21:47.330 killing process with pid 120520 00:21:47.330 12:41:29 -- common/autotest_common.sh@945 -- # kill 120520 00:21:47.330 [2024-10-01 12:41:29.841645] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:47.330 12:41:29 -- common/autotest_common.sh@950 -- # wait 120520 00:21:47.330 [2024-10-01 12:41:29.841875] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:48.708 ************************************ 00:21:48.708 END TEST raid_state_function_test_sb 00:21:48.708 ************************************ 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:48.708 00:21:48.708 real 0m12.465s 00:21:48.708 user 0m21.196s 00:21:48.708 sys 0m2.104s 00:21:48.708 12:41:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.708 12:41:30 -- common/autotest_common.sh@10 -- # set +x 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:21:48.708 12:41:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:48.708 12:41:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:48.708 12:41:30 -- common/autotest_common.sh@10 -- # set +x 00:21:48.708 ************************************ 00:21:48.708 START TEST raid_superblock_test 00:21:48.708 ************************************ 00:21:48.708 12:41:30 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:48.708 12:41:30 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@357 -- # raid_pid=120942 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:48.708 12:41:31 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120942 /var/tmp/spdk-raid.sock 00:21:48.708 12:41:31 -- common/autotest_common.sh@819 -- # '[' -z 120942 ']' 00:21:48.708 12:41:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:48.708 12:41:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:48.708 12:41:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:48.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
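The raid_superblock_test run above begins by launching a dedicated bdev_svc application on a private RPC socket and parking in waitforlisten until that socket answers. A minimal sketch of the same launch pattern, assuming the standard SPDK repository layout; the polling loop is an illustrative condensation of the waitforlisten helper, not the verbatim function:

    sock=/var/tmp/spdk-raid.sock
    # -r: private RPC socket for this test; -L bdev_raid: enable that debug log flag
    ./test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!
    # Block until the app is up and the UNIX-domain socket accepts RPCs.
    until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done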
00:21:48.708 12:41:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:48.708 12:41:31 -- common/autotest_common.sh@10 -- # set +x 00:21:48.708 [2024-10-01 12:41:31.076783] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:48.708 [2024-10-01 12:41:31.077144] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120942 ] 00:21:48.968 [2024-10-01 12:41:31.241332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.968 [2024-10-01 12:41:31.400836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.227 [2024-10-01 12:41:31.545440] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.486 12:41:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:49.486 12:41:31 -- common/autotest_common.sh@852 -- # return 0 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.486 12:41:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:49.745 malloc1 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:49.745 [2024-10-01 12:41:32.238585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:49.745 [2024-10-01 12:41:32.238815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.745 [2024-10-01 12:41:32.238895] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:49.745 [2024-10-01 12:41:32.239028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.745 [2024-10-01 12:41:32.241427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.745 [2024-10-01 12:41:32.241594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:49.745 pt1 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.745 12:41:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:50.005 malloc2 00:21:50.005 12:41:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:50.264 [2024-10-01 12:41:32.649408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:50.264 [2024-10-01 12:41:32.649613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.264 [2024-10-01 12:41:32.649701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:50.264 [2024-10-01 12:41:32.649833] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.264 [2024-10-01 12:41:32.652085] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.264 [2024-10-01 12:41:32.652241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:50.264 pt2 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:50.264 12:41:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:50.523 malloc3 00:21:50.523 12:41:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:50.523 [2024-10-01 12:41:33.050149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:50.523 [2024-10-01 12:41:33.050354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.523 [2024-10-01 12:41:33.050439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:50.524 [2024-10-01 12:41:33.050542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.524 [2024-10-01 12:41:33.052707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.524 [2024-10-01 12:41:33.052856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:50.524 pt3 00:21:50.782 12:41:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:50.782 12:41:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:50.782 12:41:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:21:50.782 12:41:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:21:50.782 12:41:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:50.782 12:41:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:50.783 12:41:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:50.783 12:41:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:50.783 12:41:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:50.783 malloc4 00:21:50.783 12:41:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:51.041 [2024-10-01 12:41:33.447706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:51.041 [2024-10-01 12:41:33.447937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.041 [2024-10-01 12:41:33.447998] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:51.041 [2024-10-01 12:41:33.448098] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.041 [2024-10-01 12:41:33.450309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.041 [2024-10-01 12:41:33.450478] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:51.041 pt4 00:21:51.041 12:41:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:51.041 12:41:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:51.041 12:41:33 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:51.300 [2024-10-01 12:41:33.631634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:51.300 [2024-10-01 12:41:33.633592] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:51.300 [2024-10-01 12:41:33.633785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:51.300 [2024-10-01 12:41:33.633880] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:51.300 [2024-10-01 12:41:33.634135] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:21:51.300 [2024-10-01 12:41:33.634224] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:51.300 [2024-10-01 12:41:33.634360] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:51.300 [2024-10-01 12:41:33.634734] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:21:51.300 [2024-10-01 12:41:33.634831] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:21:51.300 [2024-10-01 12:41:33.635023] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:21:51.300 12:41:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.560 12:41:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.560 "name": "raid_bdev1", 00:21:51.560 "uuid": "43cb35d1-9b37-4139-b16b-7e3d684f3237", 00:21:51.560 "strip_size_kb": 64, 00:21:51.560 "state": "online", 00:21:51.560 "raid_level": "concat", 00:21:51.560 "superblock": true, 00:21:51.560 "num_base_bdevs": 4, 00:21:51.560 "num_base_bdevs_discovered": 4, 00:21:51.560 "num_base_bdevs_operational": 4, 00:21:51.560 "base_bdevs_list": [ 00:21:51.560 { 00:21:51.560 "name": "pt1", 00:21:51.560 "uuid": "96a61714-ebe6-53aa-9648-0036cc36a7c1", 00:21:51.560 "is_configured": true, 00:21:51.560 "data_offset": 2048, 00:21:51.560 "data_size": 63488 00:21:51.560 }, 00:21:51.560 { 00:21:51.560 "name": "pt2", 00:21:51.560 "uuid": "23eed36b-3ebe-5e9d-b582-80ca38b473d5", 00:21:51.560 "is_configured": true, 00:21:51.560 "data_offset": 2048, 00:21:51.560 "data_size": 63488 00:21:51.560 }, 00:21:51.560 { 00:21:51.560 "name": "pt3", 00:21:51.560 "uuid": "c23a0f8e-e1ee-5a19-8e4a-15da71a1cc68", 00:21:51.560 "is_configured": true, 00:21:51.560 "data_offset": 2048, 00:21:51.560 "data_size": 63488 00:21:51.560 }, 00:21:51.560 { 00:21:51.560 "name": "pt4", 00:21:51.560 "uuid": "8412c357-1721-5b9f-ad6d-58c4978f4728", 00:21:51.560 "is_configured": true, 00:21:51.560 "data_offset": 2048, 00:21:51.560 "data_size": 63488 00:21:51.560 } 00:21:51.560 ] 00:21:51.560 }' 00:21:51.560 12:41:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.560 12:41:33 -- common/autotest_common.sh@10 -- # set +x 00:21:52.127 12:41:34 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:52.127 12:41:34 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:52.127 [2024-10-01 12:41:34.514436] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.127 12:41:34 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=43cb35d1-9b37-4139-b16b-7e3d684f3237 00:21:52.127 12:41:34 -- bdev/bdev_raid.sh@380 -- # '[' -z 43cb35d1-9b37-4139-b16b-7e3d684f3237 ']' 00:21:52.127 12:41:34 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:52.385 [2024-10-01 12:41:34.701950] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.385 [2024-10-01 12:41:34.702081] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.385 [2024-10-01 12:41:34.702289] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.385 [2024-10-01 12:41:34.702378] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.385 [2024-10-01 12:41:34.702560] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:21:52.385 12:41:34 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.385 12:41:34 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:52.385 12:41:34 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:52.385 12:41:34 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:52.385 12:41:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:52.385 12:41:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
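The JSON dump above is verify_raid_bdev_state comparing the live array against the expected online/concat/64/4 tuple. A hedged sketch of the whole round trip exercised so far, using only the RPCs visible in the trace (sizes and UUIDs mirror the run; the inline state check is a condensed stand-in for the real helper):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b malloc$i            # 32 MiB backing bdev, 512 B blocks
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
             -u 00000000-0000-0000-0000-00000000000$i          # fixed UUID so superblocks are reproducible
    done
    # -z 64: strip size in KiB; -s: write a RAID superblock to every base bdev
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [[ $state == online ]] || exit 1

Because -s wrote a superblock to each passthru bdev, tearing the array down and recreating the pt bdevs later lets bdev_raid reassemble raid_bdev1 from the on-disk metadata alone, which is what the examine-driven "configuring" states further down exercise.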
00:21:52.644 12:41:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:52.644 12:41:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:52.904 12:41:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:52.904 12:41:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:53.163 12:41:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:53.163 12:41:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:53.163 12:41:35 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:53.163 12:41:35 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:53.421 12:41:35 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:53.421 12:41:35 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:53.421 12:41:35 -- common/autotest_common.sh@640 -- # local es=0 00:21:53.421 12:41:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:53.421 12:41:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.421 12:41:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:53.421 12:41:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.421 12:41:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:53.421 12:41:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.421 12:41:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:53.421 12:41:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.421 12:41:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:53.421 12:41:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:53.681 [2024-10-01 12:41:35.984082] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:53.681 [2024-10-01 12:41:35.986097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:53.681 [2024-10-01 12:41:35.986272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:53.681 [2024-10-01 12:41:35.986341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:53.681 [2024-10-01 12:41:35.986495] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:53.681 [2024-10-01 12:41:35.986663] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:53.681 [2024-10-01 12:41:35.986726] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:53.681 
[2024-10-01 12:41:35.986844] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:21:53.681 [2024-10-01 12:41:35.986948] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:53.681 [2024-10-01 12:41:35.986983] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:21:53.681 request: 00:21:53.681 { 00:21:53.681 "name": "raid_bdev1", 00:21:53.681 "raid_level": "concat", 00:21:53.681 "base_bdevs": [ 00:21:53.681 "malloc1", 00:21:53.681 "malloc2", 00:21:53.681 "malloc3", 00:21:53.681 "malloc4" 00:21:53.681 ], 00:21:53.681 "superblock": false, 00:21:53.681 "strip_size_kb": 64, 00:21:53.681 "method": "bdev_raid_create", 00:21:53.681 "req_id": 1 00:21:53.681 } 00:21:53.681 Got JSON-RPC error response 00:21:53.681 response: 00:21:53.681 { 00:21:53.681 "code": -17, 00:21:53.681 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:53.681 } 00:21:53.681 12:41:35 -- common/autotest_common.sh@643 -- # es=1 00:21:53.681 12:41:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:53.681 12:41:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:53.681 12:41:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:53.681 12:41:36 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.681 12:41:36 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:53.681 12:41:36 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:53.681 12:41:36 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:53.681 12:41:36 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:53.940 [2024-10-01 12:41:36.323597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:53.940 [2024-10-01 12:41:36.323782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.940 [2024-10-01 12:41:36.323838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:53.940 [2024-10-01 12:41:36.323936] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.940 [2024-10-01 12:41:36.326089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.940 [2024-10-01 12:41:36.326254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:53.940 [2024-10-01 12:41:36.326432] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:53.940 [2024-10-01 12:41:36.326504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:53.940 pt1 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.940 12:41:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.199 12:41:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.199 "name": "raid_bdev1", 00:21:54.199 "uuid": "43cb35d1-9b37-4139-b16b-7e3d684f3237", 00:21:54.199 "strip_size_kb": 64, 00:21:54.199 "state": "configuring", 00:21:54.199 "raid_level": "concat", 00:21:54.199 "superblock": true, 00:21:54.199 "num_base_bdevs": 4, 00:21:54.199 "num_base_bdevs_discovered": 1, 00:21:54.199 "num_base_bdevs_operational": 4, 00:21:54.199 "base_bdevs_list": [ 00:21:54.199 { 00:21:54.199 "name": "pt1", 00:21:54.199 "uuid": "96a61714-ebe6-53aa-9648-0036cc36a7c1", 00:21:54.199 "is_configured": true, 00:21:54.199 "data_offset": 2048, 00:21:54.199 "data_size": 63488 00:21:54.199 }, 00:21:54.199 { 00:21:54.199 "name": null, 00:21:54.199 "uuid": "23eed36b-3ebe-5e9d-b582-80ca38b473d5", 00:21:54.199 "is_configured": false, 00:21:54.199 "data_offset": 2048, 00:21:54.199 "data_size": 63488 00:21:54.199 }, 00:21:54.199 { 00:21:54.199 "name": null, 00:21:54.199 "uuid": "c23a0f8e-e1ee-5a19-8e4a-15da71a1cc68", 00:21:54.199 "is_configured": false, 00:21:54.199 "data_offset": 2048, 00:21:54.199 "data_size": 63488 00:21:54.199 }, 00:21:54.199 { 00:21:54.199 "name": null, 00:21:54.199 "uuid": "8412c357-1721-5b9f-ad6d-58c4978f4728", 00:21:54.199 "is_configured": false, 00:21:54.199 "data_offset": 2048, 00:21:54.199 "data_size": 63488 00:21:54.199 } 00:21:54.199 ] 00:21:54.199 }' 00:21:54.199 12:41:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.199 12:41:36 -- common/autotest_common.sh@10 -- # set +x 00:21:54.768 12:41:37 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:21:54.768 12:41:37 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:54.768 [2024-10-01 12:41:37.238259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:54.768 [2024-10-01 12:41:37.238464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.768 [2024-10-01 12:41:37.238548] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:54.768 [2024-10-01 12:41:37.238648] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.768 [2024-10-01 12:41:37.239088] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.768 [2024-10-01 12:41:37.239232] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:54.768 [2024-10-01 12:41:37.239407] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:54.768 [2024-10-01 12:41:37.239509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:54.768 pt2 00:21:54.768 12:41:37 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:55.027 [2024-10-01 12:41:37.421991] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.027 12:41:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.286 12:41:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.286 "name": "raid_bdev1", 00:21:55.286 "uuid": "43cb35d1-9b37-4139-b16b-7e3d684f3237", 00:21:55.286 "strip_size_kb": 64, 00:21:55.286 "state": "configuring", 00:21:55.286 "raid_level": "concat", 00:21:55.286 "superblock": true, 00:21:55.286 "num_base_bdevs": 4, 00:21:55.286 "num_base_bdevs_discovered": 1, 00:21:55.286 "num_base_bdevs_operational": 4, 00:21:55.286 "base_bdevs_list": [ 00:21:55.286 { 00:21:55.286 "name": "pt1", 00:21:55.286 "uuid": "96a61714-ebe6-53aa-9648-0036cc36a7c1", 00:21:55.286 "is_configured": true, 00:21:55.286 "data_offset": 2048, 00:21:55.286 "data_size": 63488 00:21:55.286 }, 00:21:55.286 { 00:21:55.286 "name": null, 00:21:55.286 "uuid": "23eed36b-3ebe-5e9d-b582-80ca38b473d5", 00:21:55.286 "is_configured": false, 00:21:55.286 "data_offset": 2048, 00:21:55.286 "data_size": 63488 00:21:55.286 }, 00:21:55.286 { 00:21:55.286 "name": null, 00:21:55.286 "uuid": "c23a0f8e-e1ee-5a19-8e4a-15da71a1cc68", 00:21:55.286 "is_configured": false, 00:21:55.286 "data_offset": 2048, 00:21:55.286 "data_size": 63488 00:21:55.286 }, 00:21:55.286 { 00:21:55.286 "name": null, 00:21:55.286 "uuid": "8412c357-1721-5b9f-ad6d-58c4978f4728", 00:21:55.286 "is_configured": false, 00:21:55.286 "data_offset": 2048, 00:21:55.286 "data_size": 63488 00:21:55.286 } 00:21:55.286 ] 00:21:55.286 }' 00:21:55.286 12:41:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.286 12:41:37 -- common/autotest_common.sh@10 -- # set +x 00:21:55.853 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:55.853 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:55.853 12:41:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:55.853 [2024-10-01 12:41:38.328693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:55.854 [2024-10-01 12:41:38.328892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.854 [2024-10-01 12:41:38.328957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:55.854 [2024-10-01 12:41:38.329052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.854 [2024-10-01 12:41:38.329467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.854 [2024-10-01 12:41:38.329613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:55.854 [2024-10-01 12:41:38.329782] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:21:55.854 [2024-10-01 12:41:38.329867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.854 pt2 00:21:55.854 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:55.854 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:55.854 12:41:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:56.112 [2024-10-01 12:41:38.508421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:56.112 [2024-10-01 12:41:38.508632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.112 [2024-10-01 12:41:38.508688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:56.112 [2024-10-01 12:41:38.508780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.112 [2024-10-01 12:41:38.509178] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.112 [2024-10-01 12:41:38.509314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:56.112 [2024-10-01 12:41:38.509482] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:56.112 [2024-10-01 12:41:38.509571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:56.112 pt3 00:21:56.112 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:56.112 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:56.112 12:41:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:56.371 [2024-10-01 12:41:38.668195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:56.371 [2024-10-01 12:41:38.668384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.371 [2024-10-01 12:41:38.668445] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:56.371 [2024-10-01 12:41:38.668525] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.371 [2024-10-01 12:41:38.668892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.371 [2024-10-01 12:41:38.669019] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:56.371 [2024-10-01 12:41:38.669185] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:56.371 [2024-10-01 12:41:38.669262] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:56.372 [2024-10-01 12:41:38.669442] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:21:56.372 [2024-10-01 12:41:38.669518] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:56.372 [2024-10-01 12:41:38.669670] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:56.372 [2024-10-01 12:41:38.669976] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:21:56.372 [2024-10-01 12:41:38.670069] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:21:56.372 [2024-10-01 12:41:38.670282] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:21:56.372 pt4 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:56.372 "name": "raid_bdev1", 00:21:56.372 "uuid": "43cb35d1-9b37-4139-b16b-7e3d684f3237", 00:21:56.372 "strip_size_kb": 64, 00:21:56.372 "state": "online", 00:21:56.372 "raid_level": "concat", 00:21:56.372 "superblock": true, 00:21:56.372 "num_base_bdevs": 4, 00:21:56.372 "num_base_bdevs_discovered": 4, 00:21:56.372 "num_base_bdevs_operational": 4, 00:21:56.372 "base_bdevs_list": [ 00:21:56.372 { 00:21:56.372 "name": "pt1", 00:21:56.372 "uuid": "96a61714-ebe6-53aa-9648-0036cc36a7c1", 00:21:56.372 "is_configured": true, 00:21:56.372 "data_offset": 2048, 00:21:56.372 "data_size": 63488 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "name": "pt2", 00:21:56.372 "uuid": "23eed36b-3ebe-5e9d-b582-80ca38b473d5", 00:21:56.372 "is_configured": true, 00:21:56.372 "data_offset": 2048, 00:21:56.372 "data_size": 63488 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "name": "pt3", 00:21:56.372 "uuid": "c23a0f8e-e1ee-5a19-8e4a-15da71a1cc68", 00:21:56.372 "is_configured": true, 00:21:56.372 "data_offset": 2048, 00:21:56.372 "data_size": 63488 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "name": "pt4", 00:21:56.372 "uuid": "8412c357-1721-5b9f-ad6d-58c4978f4728", 00:21:56.372 "is_configured": true, 00:21:56.372 "data_offset": 2048, 00:21:56.372 "data_size": 63488 00:21:56.372 } 00:21:56.372 ] 00:21:56.372 }' 00:21:56.372 12:41:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:56.372 12:41:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.939 12:41:39 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:56.939 12:41:39 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:57.198 [2024-10-01 12:41:39.575036] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.198 12:41:39 -- bdev/bdev_raid.sh@430 -- # '[' 43cb35d1-9b37-4139-b16b-7e3d684f3237 '!=' 43cb35d1-9b37-4139-b16b-7e3d684f3237 ']' 00:21:57.198 12:41:39 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:21:57.198 12:41:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:57.198 12:41:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:57.198 12:41:39 -- bdev/bdev_raid.sh@511 -- # killprocess 120942 00:21:57.198 12:41:39 -- common/autotest_common.sh@926 -- # '[' 
-z 120942 ']' 00:21:57.198 12:41:39 -- common/autotest_common.sh@930 -- # kill -0 120942 00:21:57.198 12:41:39 -- common/autotest_common.sh@931 -- # uname 00:21:57.198 12:41:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:57.198 12:41:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120942 00:21:57.198 12:41:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:57.198 12:41:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:57.198 12:41:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120942' 00:21:57.198 killing process with pid 120942 00:21:57.198 12:41:39 -- common/autotest_common.sh@945 -- # kill 120942 00:21:57.198 [2024-10-01 12:41:39.637247] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.198 12:41:39 -- common/autotest_common.sh@950 -- # wait 120942 00:21:57.198 [2024-10-01 12:41:39.637332] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.198 [2024-10-01 12:41:39.637383] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.199 [2024-10-01 12:41:39.637391] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:21:57.458 [2024-10-01 12:41:39.945011] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.839 ************************************ 00:21:58.839 END TEST raid_superblock_test 00:21:58.839 ************************************ 00:21:58.839 12:41:40 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:58.839 00:21:58.839 real 0m10.002s 00:21:58.839 user 0m16.545s 00:21:58.839 sys 0m1.679s 00:21:58.839 12:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.839 12:41:40 -- common/autotest_common.sh@10 -- # set +x 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:21:58.839 12:41:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:58.839 12:41:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:58.839 12:41:41 -- common/autotest_common.sh@10 -- # set +x 00:21:58.839 ************************************ 00:21:58.839 START TEST raid_state_function_test 00:21:58.839 ************************************ 00:21:58.839 12:41:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.839 12:41:41 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=121251 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121251' 00:21:58.839 Process raid pid: 121251 00:21:58.839 12:41:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121251 /var/tmp/spdk-raid.sock 00:21:58.839 12:41:41 -- common/autotest_common.sh@819 -- # '[' -z 121251 ']' 00:21:58.839 12:41:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:58.839 12:41:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.839 12:41:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:58.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:58.839 12:41:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.839 12:41:41 -- common/autotest_common.sh@10 -- # set +x 00:21:58.839 [2024-10-01 12:41:41.177871] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
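The parameter setup traced above branches on the RAID level: raid1 mirrors and therefore takes no strip size, while raid0 and concat need -z. A condensed sketch of that branching, following the locals shown in the trace (the variable names are the script's own; the surrounding structure is an assumption, not the verbatim function):

    raid_level=raid1 superblock=false
    if [[ $raid_level != raid1 ]]; then
        strip_size=64
        strip_size_create_arg="-z $strip_size"   # striped levels require a strip size
    else
        strip_size=0
        strip_size_create_arg=""                 # mirroring: no striping arguments
    fi
    if [[ $superblock == true ]]; then
        superblock_create_arg="-s"
    else
        superblock_create_arg=""                 # this run tests arrays without superblocks
    fi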
00:21:58.839 [2024-10-01 12:41:41.178272] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.839 [2024-10-01 12:41:41.346325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.099 [2024-10-01 12:41:41.496193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.358 [2024-10-01 12:41:41.647973] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.617 12:41:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.617 12:41:41 -- common/autotest_common.sh@852 -- # return 0 00:21:59.617 12:41:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:59.617 [2024-10-01 12:41:42.134409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:59.617 [2024-10-01 12:41:42.134632] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:59.617 [2024-10-01 12:41:42.134711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.617 [2024-10-01 12:41:42.134762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.617 [2024-10-01 12:41:42.134788] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:59.617 [2024-10-01 12:41:42.134843] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.617 [2024-10-01 12:41:42.134923] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:59.617 [2024-10-01 12:41:42.134971] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.875 "name": "Existed_Raid", 00:21:59.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.875 "strip_size_kb": 0, 00:21:59.875 "state": "configuring", 00:21:59.875 "raid_level": "raid1", 00:21:59.875 "superblock": false, 00:21:59.875 "num_base_bdevs": 4, 00:21:59.875 "num_base_bdevs_discovered": 0, 00:21:59.875 "num_base_bdevs_operational": 4, 00:21:59.875 "base_bdevs_list": [ 00:21:59.875 { 00:21:59.875 "name": 
"BaseBdev1", 00:21:59.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.875 "is_configured": false, 00:21:59.875 "data_offset": 0, 00:21:59.875 "data_size": 0 00:21:59.875 }, 00:21:59.875 { 00:21:59.875 "name": "BaseBdev2", 00:21:59.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.875 "is_configured": false, 00:21:59.875 "data_offset": 0, 00:21:59.875 "data_size": 0 00:21:59.875 }, 00:21:59.875 { 00:21:59.875 "name": "BaseBdev3", 00:21:59.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.875 "is_configured": false, 00:21:59.875 "data_offset": 0, 00:21:59.875 "data_size": 0 00:21:59.875 }, 00:21:59.875 { 00:21:59.875 "name": "BaseBdev4", 00:21:59.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.875 "is_configured": false, 00:21:59.875 "data_offset": 0, 00:21:59.875 "data_size": 0 00:21:59.875 } 00:21:59.875 ] 00:21:59.875 }' 00:21:59.875 12:41:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.875 12:41:42 -- common/autotest_common.sh@10 -- # set +x 00:22:00.443 12:41:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:00.702 [2024-10-01 12:41:43.001045] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:00.702 [2024-10-01 12:41:43.001166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:22:00.702 12:41:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:00.702 [2024-10-01 12:41:43.172792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:00.702 [2024-10-01 12:41:43.172949] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:00.702 [2024-10-01 12:41:43.173086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:00.702 [2024-10-01 12:41:43.173144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:00.702 [2024-10-01 12:41:43.173170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:00.702 [2024-10-01 12:41:43.173270] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:00.702 [2024-10-01 12:41:43.173301] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:00.702 [2024-10-01 12:41:43.173342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:00.702 12:41:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:00.961 [2024-10-01 12:41:43.379214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.961 BaseBdev1 00:22:00.961 12:41:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:00.961 12:41:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:00.961 12:41:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:00.961 12:41:43 -- common/autotest_common.sh@889 -- # local i 00:22:00.961 12:41:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:00.961 12:41:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:00.961 12:41:43 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.221 12:41:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:01.221 [ 00:22:01.221 { 00:22:01.221 "name": "BaseBdev1", 00:22:01.221 "aliases": [ 00:22:01.221 "fa29e6c7-5008-45cc-8982-47f4ee6fcdee" 00:22:01.221 ], 00:22:01.221 "product_name": "Malloc disk", 00:22:01.221 "block_size": 512, 00:22:01.221 "num_blocks": 65536, 00:22:01.221 "uuid": "fa29e6c7-5008-45cc-8982-47f4ee6fcdee", 00:22:01.221 "assigned_rate_limits": { 00:22:01.221 "rw_ios_per_sec": 0, 00:22:01.221 "rw_mbytes_per_sec": 0, 00:22:01.221 "r_mbytes_per_sec": 0, 00:22:01.221 "w_mbytes_per_sec": 0 00:22:01.221 }, 00:22:01.221 "claimed": true, 00:22:01.221 "claim_type": "exclusive_write", 00:22:01.221 "zoned": false, 00:22:01.221 "supported_io_types": { 00:22:01.221 "read": true, 00:22:01.221 "write": true, 00:22:01.221 "unmap": true, 00:22:01.221 "write_zeroes": true, 00:22:01.221 "flush": true, 00:22:01.221 "reset": true, 00:22:01.221 "compare": false, 00:22:01.221 "compare_and_write": false, 00:22:01.221 "abort": true, 00:22:01.221 "nvme_admin": false, 00:22:01.221 "nvme_io": false 00:22:01.221 }, 00:22:01.221 "memory_domains": [ 00:22:01.221 { 00:22:01.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.221 "dma_device_type": 2 00:22:01.221 } 00:22:01.221 ], 00:22:01.221 "driver_specific": {} 00:22:01.221 } 00:22:01.221 ] 00:22:01.221 12:41:43 -- common/autotest_common.sh@895 -- # return 0 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.221 12:41:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.481 12:41:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.481 "name": "Existed_Raid", 00:22:01.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.481 "strip_size_kb": 0, 00:22:01.481 "state": "configuring", 00:22:01.481 "raid_level": "raid1", 00:22:01.481 "superblock": false, 00:22:01.481 "num_base_bdevs": 4, 00:22:01.481 "num_base_bdevs_discovered": 1, 00:22:01.481 "num_base_bdevs_operational": 4, 00:22:01.481 "base_bdevs_list": [ 00:22:01.481 { 00:22:01.481 "name": "BaseBdev1", 00:22:01.481 "uuid": "fa29e6c7-5008-45cc-8982-47f4ee6fcdee", 00:22:01.481 "is_configured": true, 00:22:01.481 "data_offset": 0, 00:22:01.481 "data_size": 65536 00:22:01.481 }, 00:22:01.481 { 00:22:01.481 "name": "BaseBdev2", 00:22:01.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.481 "is_configured": false, 00:22:01.481 "data_offset": 0, 00:22:01.481 "data_size": 0 00:22:01.481 }, 
00:22:01.481 { 00:22:01.481 "name": "BaseBdev3", 00:22:01.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.481 "is_configured": false, 00:22:01.481 "data_offset": 0, 00:22:01.481 "data_size": 0 00:22:01.481 }, 00:22:01.481 { 00:22:01.481 "name": "BaseBdev4", 00:22:01.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.481 "is_configured": false, 00:22:01.481 "data_offset": 0, 00:22:01.481 "data_size": 0 00:22:01.481 } 00:22:01.481 ] 00:22:01.481 }' 00:22:01.481 12:41:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.481 12:41:43 -- common/autotest_common.sh@10 -- # set +x 00:22:02.070 12:41:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:02.339 [2024-10-01 12:41:44.605417] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:02.339 [2024-10-01 12:41:44.605563] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:02.339 [2024-10-01 12:41:44.797169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.339 [2024-10-01 12:41:44.799100] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:02.339 [2024-10-01 12:41:44.799276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:02.339 [2024-10-01 12:41:44.799366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:02.339 [2024-10-01 12:41:44.799439] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:02.339 [2024-10-01 12:41:44.799466] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:02.339 [2024-10-01 12:41:44.799501] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.339 12:41:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.598 12:41:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.598 "name": "Existed_Raid", 00:22:02.598 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:02.598 "strip_size_kb": 0, 00:22:02.598 "state": "configuring", 00:22:02.598 "raid_level": "raid1", 00:22:02.598 "superblock": false, 00:22:02.598 "num_base_bdevs": 4, 00:22:02.598 "num_base_bdevs_discovered": 1, 00:22:02.598 "num_base_bdevs_operational": 4, 00:22:02.598 "base_bdevs_list": [ 00:22:02.598 { 00:22:02.598 "name": "BaseBdev1", 00:22:02.598 "uuid": "fa29e6c7-5008-45cc-8982-47f4ee6fcdee", 00:22:02.598 "is_configured": true, 00:22:02.598 "data_offset": 0, 00:22:02.598 "data_size": 65536 00:22:02.598 }, 00:22:02.598 { 00:22:02.598 "name": "BaseBdev2", 00:22:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.598 "is_configured": false, 00:22:02.598 "data_offset": 0, 00:22:02.598 "data_size": 0 00:22:02.598 }, 00:22:02.598 { 00:22:02.598 "name": "BaseBdev3", 00:22:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.598 "is_configured": false, 00:22:02.598 "data_offset": 0, 00:22:02.598 "data_size": 0 00:22:02.598 }, 00:22:02.598 { 00:22:02.598 "name": "BaseBdev4", 00:22:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.598 "is_configured": false, 00:22:02.598 "data_offset": 0, 00:22:02.598 "data_size": 0 00:22:02.598 } 00:22:02.598 ] 00:22:02.598 }' 00:22:02.598 12:41:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.598 12:41:44 -- common/autotest_common.sh@10 -- # set +x 00:22:03.167 12:41:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:03.167 [2024-10-01 12:41:45.696288] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.167 BaseBdev2 00:22:03.426 12:41:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:03.426 12:41:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:03.426 12:41:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:03.426 12:41:45 -- common/autotest_common.sh@889 -- # local i 00:22:03.426 12:41:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:03.426 12:41:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:03.426 12:41:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.426 12:41:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:03.684 [ 00:22:03.684 { 00:22:03.684 "name": "BaseBdev2", 00:22:03.684 "aliases": [ 00:22:03.684 "9773521e-9a85-41d1-a52d-1a5e5f81a2bd" 00:22:03.685 ], 00:22:03.685 "product_name": "Malloc disk", 00:22:03.685 "block_size": 512, 00:22:03.685 "num_blocks": 65536, 00:22:03.685 "uuid": "9773521e-9a85-41d1-a52d-1a5e5f81a2bd", 00:22:03.685 "assigned_rate_limits": { 00:22:03.685 "rw_ios_per_sec": 0, 00:22:03.685 "rw_mbytes_per_sec": 0, 00:22:03.685 "r_mbytes_per_sec": 0, 00:22:03.685 "w_mbytes_per_sec": 0 00:22:03.685 }, 00:22:03.685 "claimed": true, 00:22:03.685 "claim_type": "exclusive_write", 00:22:03.685 "zoned": false, 00:22:03.685 "supported_io_types": { 00:22:03.685 "read": true, 00:22:03.685 "write": true, 00:22:03.685 "unmap": true, 00:22:03.685 "write_zeroes": true, 00:22:03.685 "flush": true, 00:22:03.685 "reset": true, 00:22:03.685 "compare": false, 00:22:03.685 "compare_and_write": false, 00:22:03.685 "abort": true, 00:22:03.685 "nvme_admin": false, 00:22:03.685 "nvme_io": false 00:22:03.685 }, 00:22:03.685 "memory_domains": [ 00:22:03.685 { 
00:22:03.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.685 "dma_device_type": 2 00:22:03.685 } 00:22:03.685 ], 00:22:03.685 "driver_specific": {} 00:22:03.685 } 00:22:03.685 ] 00:22:03.685 12:41:46 -- common/autotest_common.sh@895 -- # return 0 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.685 12:41:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.943 12:41:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.943 "name": "Existed_Raid", 00:22:03.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.943 "strip_size_kb": 0, 00:22:03.943 "state": "configuring", 00:22:03.943 "raid_level": "raid1", 00:22:03.943 "superblock": false, 00:22:03.943 "num_base_bdevs": 4, 00:22:03.943 "num_base_bdevs_discovered": 2, 00:22:03.943 "num_base_bdevs_operational": 4, 00:22:03.943 "base_bdevs_list": [ 00:22:03.943 { 00:22:03.943 "name": "BaseBdev1", 00:22:03.943 "uuid": "fa29e6c7-5008-45cc-8982-47f4ee6fcdee", 00:22:03.943 "is_configured": true, 00:22:03.943 "data_offset": 0, 00:22:03.943 "data_size": 65536 00:22:03.943 }, 00:22:03.943 { 00:22:03.943 "name": "BaseBdev2", 00:22:03.943 "uuid": "9773521e-9a85-41d1-a52d-1a5e5f81a2bd", 00:22:03.943 "is_configured": true, 00:22:03.943 "data_offset": 0, 00:22:03.943 "data_size": 65536 00:22:03.943 }, 00:22:03.943 { 00:22:03.943 "name": "BaseBdev3", 00:22:03.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.943 "is_configured": false, 00:22:03.943 "data_offset": 0, 00:22:03.943 "data_size": 0 00:22:03.943 }, 00:22:03.943 { 00:22:03.943 "name": "BaseBdev4", 00:22:03.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.943 "is_configured": false, 00:22:03.943 "data_offset": 0, 00:22:03.943 "data_size": 0 00:22:03.943 } 00:22:03.943 ] 00:22:03.943 }' 00:22:03.943 12:41:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.943 12:41:46 -- common/autotest_common.sh@10 -- # set +x 00:22:04.511 12:41:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:04.511 [2024-10-01 12:41:47.007205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:04.511 BaseBdev3 00:22:04.511 12:41:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:04.511 12:41:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:04.511 12:41:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:04.511 12:41:47 -- 
common/autotest_common.sh@889 -- # local i 00:22:04.511 12:41:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:04.511 12:41:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:04.511 12:41:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.769 12:41:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:05.027 [ 00:22:05.027 { 00:22:05.027 "name": "BaseBdev3", 00:22:05.027 "aliases": [ 00:22:05.027 "82cac7d9-5a02-43b4-9354-f879ff785d55" 00:22:05.027 ], 00:22:05.027 "product_name": "Malloc disk", 00:22:05.027 "block_size": 512, 00:22:05.027 "num_blocks": 65536, 00:22:05.027 "uuid": "82cac7d9-5a02-43b4-9354-f879ff785d55", 00:22:05.027 "assigned_rate_limits": { 00:22:05.027 "rw_ios_per_sec": 0, 00:22:05.027 "rw_mbytes_per_sec": 0, 00:22:05.027 "r_mbytes_per_sec": 0, 00:22:05.027 "w_mbytes_per_sec": 0 00:22:05.027 }, 00:22:05.027 "claimed": true, 00:22:05.027 "claim_type": "exclusive_write", 00:22:05.028 "zoned": false, 00:22:05.028 "supported_io_types": { 00:22:05.028 "read": true, 00:22:05.028 "write": true, 00:22:05.028 "unmap": true, 00:22:05.028 "write_zeroes": true, 00:22:05.028 "flush": true, 00:22:05.028 "reset": true, 00:22:05.028 "compare": false, 00:22:05.028 "compare_and_write": false, 00:22:05.028 "abort": true, 00:22:05.028 "nvme_admin": false, 00:22:05.028 "nvme_io": false 00:22:05.028 }, 00:22:05.028 "memory_domains": [ 00:22:05.028 { 00:22:05.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.028 "dma_device_type": 2 00:22:05.028 } 00:22:05.028 ], 00:22:05.028 "driver_specific": {} 00:22:05.028 } 00:22:05.028 ] 00:22:05.028 12:41:47 -- common/autotest_common.sh@895 -- # return 0 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.028 12:41:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.286 12:41:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.286 "name": "Existed_Raid", 00:22:05.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.286 "strip_size_kb": 0, 00:22:05.286 "state": "configuring", 00:22:05.286 "raid_level": "raid1", 00:22:05.286 "superblock": false, 00:22:05.286 "num_base_bdevs": 4, 00:22:05.286 "num_base_bdevs_discovered": 3, 00:22:05.286 "num_base_bdevs_operational": 4, 00:22:05.286 "base_bdevs_list": [ 00:22:05.286 { 00:22:05.286 "name": "BaseBdev1", 
00:22:05.286 "uuid": "fa29e6c7-5008-45cc-8982-47f4ee6fcdee", 00:22:05.286 "is_configured": true, 00:22:05.286 "data_offset": 0, 00:22:05.286 "data_size": 65536 00:22:05.286 }, 00:22:05.286 { 00:22:05.286 "name": "BaseBdev2", 00:22:05.286 "uuid": "9773521e-9a85-41d1-a52d-1a5e5f81a2bd", 00:22:05.286 "is_configured": true, 00:22:05.286 "data_offset": 0, 00:22:05.286 "data_size": 65536 00:22:05.286 }, 00:22:05.286 { 00:22:05.286 "name": "BaseBdev3", 00:22:05.286 "uuid": "82cac7d9-5a02-43b4-9354-f879ff785d55", 00:22:05.286 "is_configured": true, 00:22:05.286 "data_offset": 0, 00:22:05.286 "data_size": 65536 00:22:05.286 }, 00:22:05.286 { 00:22:05.286 "name": "BaseBdev4", 00:22:05.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.286 "is_configured": false, 00:22:05.286 "data_offset": 0, 00:22:05.286 "data_size": 0 00:22:05.286 } 00:22:05.286 ] 00:22:05.286 }' 00:22:05.286 12:41:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.286 12:41:47 -- common/autotest_common.sh@10 -- # set +x 00:22:05.855 12:41:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:05.855 [2024-10-01 12:41:48.377971] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:05.855 [2024-10-01 12:41:48.378186] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:22:05.855 [2024-10-01 12:41:48.378225] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:05.855 [2024-10-01 12:41:48.378436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:05.855 [2024-10-01 12:41:48.378853] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:22:05.855 [2024-10-01 12:41:48.378959] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:22:05.855 [2024-10-01 12:41:48.379265] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.855 BaseBdev4 00:22:06.114 12:41:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:06.114 12:41:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:22:06.114 12:41:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:06.114 12:41:48 -- common/autotest_common.sh@889 -- # local i 00:22:06.114 12:41:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:06.114 12:41:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:06.114 12:41:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:06.114 12:41:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:06.373 [ 00:22:06.373 { 00:22:06.373 "name": "BaseBdev4", 00:22:06.373 "aliases": [ 00:22:06.373 "01e58484-8b5e-4c67-8724-ac3b2d9d7106" 00:22:06.373 ], 00:22:06.373 "product_name": "Malloc disk", 00:22:06.373 "block_size": 512, 00:22:06.373 "num_blocks": 65536, 00:22:06.373 "uuid": "01e58484-8b5e-4c67-8724-ac3b2d9d7106", 00:22:06.373 "assigned_rate_limits": { 00:22:06.373 "rw_ios_per_sec": 0, 00:22:06.373 "rw_mbytes_per_sec": 0, 00:22:06.373 "r_mbytes_per_sec": 0, 00:22:06.373 "w_mbytes_per_sec": 0 00:22:06.373 }, 00:22:06.373 "claimed": true, 00:22:06.373 "claim_type": "exclusive_write", 00:22:06.373 "zoned": false, 00:22:06.373 "supported_io_types": { 
00:22:06.373 "read": true, 00:22:06.373 "write": true, 00:22:06.373 "unmap": true, 00:22:06.373 "write_zeroes": true, 00:22:06.373 "flush": true, 00:22:06.373 "reset": true, 00:22:06.373 "compare": false, 00:22:06.373 "compare_and_write": false, 00:22:06.373 "abort": true, 00:22:06.373 "nvme_admin": false, 00:22:06.373 "nvme_io": false 00:22:06.373 }, 00:22:06.373 "memory_domains": [ 00:22:06.373 { 00:22:06.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.373 "dma_device_type": 2 00:22:06.373 } 00:22:06.373 ], 00:22:06.373 "driver_specific": {} 00:22:06.373 } 00:22:06.373 ] 00:22:06.373 12:41:48 -- common/autotest_common.sh@895 -- # return 0 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.373 12:41:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.632 12:41:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.632 "name": "Existed_Raid", 00:22:06.632 "uuid": "94e740e6-20fb-4e74-a428-b42998994865", 00:22:06.632 "strip_size_kb": 0, 00:22:06.632 "state": "online", 00:22:06.632 "raid_level": "raid1", 00:22:06.632 "superblock": false, 00:22:06.632 "num_base_bdevs": 4, 00:22:06.632 "num_base_bdevs_discovered": 4, 00:22:06.632 "num_base_bdevs_operational": 4, 00:22:06.632 "base_bdevs_list": [ 00:22:06.632 { 00:22:06.632 "name": "BaseBdev1", 00:22:06.632 "uuid": "fa29e6c7-5008-45cc-8982-47f4ee6fcdee", 00:22:06.632 "is_configured": true, 00:22:06.632 "data_offset": 0, 00:22:06.632 "data_size": 65536 00:22:06.632 }, 00:22:06.632 { 00:22:06.632 "name": "BaseBdev2", 00:22:06.632 "uuid": "9773521e-9a85-41d1-a52d-1a5e5f81a2bd", 00:22:06.632 "is_configured": true, 00:22:06.632 "data_offset": 0, 00:22:06.632 "data_size": 65536 00:22:06.632 }, 00:22:06.632 { 00:22:06.632 "name": "BaseBdev3", 00:22:06.632 "uuid": "82cac7d9-5a02-43b4-9354-f879ff785d55", 00:22:06.632 "is_configured": true, 00:22:06.632 "data_offset": 0, 00:22:06.632 "data_size": 65536 00:22:06.632 }, 00:22:06.632 { 00:22:06.632 "name": "BaseBdev4", 00:22:06.632 "uuid": "01e58484-8b5e-4c67-8724-ac3b2d9d7106", 00:22:06.632 "is_configured": true, 00:22:06.632 "data_offset": 0, 00:22:06.632 "data_size": 65536 00:22:06.632 } 00:22:06.632 ] 00:22:06.632 }' 00:22:06.632 12:41:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.632 12:41:48 -- common/autotest_common.sh@10 -- # set +x 00:22:07.198 12:41:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:07.198 [2024-10-01 12:41:49.656184] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.455 12:41:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.455 "name": "Existed_Raid", 00:22:07.455 "uuid": "94e740e6-20fb-4e74-a428-b42998994865", 00:22:07.455 "strip_size_kb": 0, 00:22:07.455 "state": "online", 00:22:07.455 "raid_level": "raid1", 00:22:07.455 "superblock": false, 00:22:07.455 "num_base_bdevs": 4, 00:22:07.455 "num_base_bdevs_discovered": 3, 00:22:07.455 "num_base_bdevs_operational": 3, 00:22:07.455 "base_bdevs_list": [ 00:22:07.455 { 00:22:07.455 "name": null, 00:22:07.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.455 "is_configured": false, 00:22:07.455 "data_offset": 0, 00:22:07.455 "data_size": 65536 00:22:07.455 }, 00:22:07.455 { 00:22:07.455 "name": "BaseBdev2", 00:22:07.455 "uuid": "9773521e-9a85-41d1-a52d-1a5e5f81a2bd", 00:22:07.455 "is_configured": true, 00:22:07.455 "data_offset": 0, 00:22:07.455 "data_size": 65536 00:22:07.455 }, 00:22:07.455 { 00:22:07.455 "name": "BaseBdev3", 00:22:07.455 "uuid": "82cac7d9-5a02-43b4-9354-f879ff785d55", 00:22:07.456 "is_configured": true, 00:22:07.456 "data_offset": 0, 00:22:07.456 "data_size": 65536 00:22:07.456 }, 00:22:07.456 { 00:22:07.456 "name": "BaseBdev4", 00:22:07.456 "uuid": "01e58484-8b5e-4c67-8724-ac3b2d9d7106", 00:22:07.456 "is_configured": true, 00:22:07.456 "data_offset": 0, 00:22:07.456 "data_size": 65536 00:22:07.456 } 00:22:07.456 ] 00:22:07.456 }' 00:22:07.456 12:41:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.456 12:41:49 -- common/autotest_common.sh@10 -- # set +x 00:22:08.021 12:41:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:08.021 12:41:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:08.021 12:41:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.021 12:41:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:08.279 12:41:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:08.279 12:41:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.279 12:41:50 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:08.537 [2024-10-01 12:41:50.853385] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.537 12:41:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:08.537 12:41:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:08.537 12:41:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.537 12:41:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:08.796 12:41:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:08.796 12:41:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.796 12:41:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:08.796 [2024-10-01 12:41:51.308452] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:09.055 12:41:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:09.055 12:41:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:09.056 12:41:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.056 12:41:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:09.316 12:41:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:09.316 12:41:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:09.316 12:41:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:09.316 [2024-10-01 12:41:51.774130] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:09.316 [2024-10-01 12:41:51.774361] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:09.316 [2024-10-01 12:41:51.774691] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.576 [2024-10-01 12:41:51.861526] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.576 [2024-10-01 12:41:51.861755] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:22:09.576 12:41:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:09.576 12:41:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:09.576 12:41:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.576 12:41:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:09.576 12:41:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:09.576 12:41:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:09.576 12:41:52 -- bdev/bdev_raid.sh@287 -- # killprocess 121251 00:22:09.576 12:41:52 -- common/autotest_common.sh@926 -- # '[' -z 121251 ']' 00:22:09.576 12:41:52 -- common/autotest_common.sh@930 -- # kill -0 121251 00:22:09.576 12:41:52 -- common/autotest_common.sh@931 -- # uname 00:22:09.576 12:41:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.576 12:41:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121251 00:22:09.576 12:41:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:09.576 12:41:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:09.576 12:41:52 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 121251' 00:22:09.576 killing process with pid 121251 00:22:09.576 12:41:52 -- common/autotest_common.sh@945 -- # kill 121251 00:22:09.576 [2024-10-01 12:41:52.082183] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.576 12:41:52 -- common/autotest_common.sh@950 -- # wait 121251 00:22:09.576 [2024-10-01 12:41:52.082678] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:10.957 ************************************ 00:22:10.957 END TEST raid_state_function_test 00:22:10.957 ************************************ 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:10.957 00:22:10.957 real 0m12.189s 00:22:10.957 user 0m20.720s 00:22:10.957 sys 0m1.970s 00:22:10.957 12:41:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.957 12:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:10.957 12:41:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:10.957 12:41:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:10.957 12:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:10.957 ************************************ 00:22:10.957 START TEST raid_state_function_test_sb 00:22:10.957 ************************************ 00:22:10.957 12:41:53 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=121670 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:10.957 Process raid pid: 121670 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121670' 00:22:10.957 12:41:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121670 /var/tmp/spdk-raid.sock 00:22:10.957 12:41:53 -- common/autotest_common.sh@819 -- # '[' -z 121670 ']' 00:22:10.957 12:41:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:10.957 12:41:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:10.957 12:41:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:10.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:10.957 12:41:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:10.957 12:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:10.957 [2024-10-01 12:41:53.468010] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:10.957 [2024-10-01 12:41:53.468320] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.217 [2024-10-01 12:41:53.639159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.476 [2024-10-01 12:41:53.788774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.476 [2024-10-01 12:41:53.937628] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.044 12:41:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.044 12:41:54 -- common/autotest_common.sh@852 -- # return 0 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:12.044 [2024-10-01 12:41:54.435323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:12.044 [2024-10-01 12:41:54.435517] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:12.044 [2024-10-01 12:41:54.435664] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.044 [2024-10-01 12:41:54.435721] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.044 [2024-10-01 12:41:54.435747] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:12.044 [2024-10-01 12:41:54.435802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:12.044 [2024-10-01 12:41:54.435905] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:12.044 [2024-10-01 12:41:54.435956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:12.044 12:41:54 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.044 12:41:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.303 12:41:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.303 "name": "Existed_Raid", 00:22:12.303 "uuid": "31677571-dd4c-4c00-9ac3-ff1242e2cfdc", 00:22:12.303 "strip_size_kb": 0, 00:22:12.303 "state": "configuring", 00:22:12.303 "raid_level": "raid1", 00:22:12.303 "superblock": true, 00:22:12.303 "num_base_bdevs": 4, 00:22:12.303 "num_base_bdevs_discovered": 0, 00:22:12.303 "num_base_bdevs_operational": 4, 00:22:12.303 "base_bdevs_list": [ 00:22:12.303 { 00:22:12.303 "name": "BaseBdev1", 00:22:12.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.303 "is_configured": false, 00:22:12.303 "data_offset": 0, 00:22:12.303 "data_size": 0 00:22:12.303 }, 00:22:12.303 { 00:22:12.303 "name": "BaseBdev2", 00:22:12.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.303 "is_configured": false, 00:22:12.303 "data_offset": 0, 00:22:12.303 "data_size": 0 00:22:12.303 }, 00:22:12.303 { 00:22:12.303 "name": "BaseBdev3", 00:22:12.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.303 "is_configured": false, 00:22:12.303 "data_offset": 0, 00:22:12.303 "data_size": 0 00:22:12.303 }, 00:22:12.303 { 00:22:12.303 "name": "BaseBdev4", 00:22:12.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.303 "is_configured": false, 00:22:12.303 "data_offset": 0, 00:22:12.303 "data_size": 0 00:22:12.303 } 00:22:12.303 ] 00:22:12.303 }' 00:22:12.303 12:41:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.303 12:41:54 -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 12:41:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:12.871 [2024-10-01 12:41:55.349845] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.871 [2024-10-01 12:41:55.349971] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:22:12.871 12:41:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:13.130 [2024-10-01 12:41:55.529636] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:13.130 [2024-10-01 12:41:55.529811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:13.130 [2024-10-01 12:41:55.529920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:13.131 [2024-10-01 12:41:55.529976] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:13.131 [2024-10-01 12:41:55.530002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:13.131 [2024-10-01 12:41:55.530054] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:13.131 [2024-10-01 12:41:55.530287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:13.131 [2024-10-01 12:41:55.530343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:13.131 12:41:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:13.390 [2024-10-01 12:41:55.731889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:13.390 BaseBdev1 00:22:13.390 12:41:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:13.390 12:41:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:13.390 12:41:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:13.390 12:41:55 -- common/autotest_common.sh@889 -- # local i 00:22:13.390 12:41:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:13.390 12:41:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:13.390 12:41:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:13.650 12:41:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:13.650 [ 00:22:13.650 { 00:22:13.650 "name": "BaseBdev1", 00:22:13.650 "aliases": [ 00:22:13.650 "cbd7feb9-3959-422e-bd38-fec147b1e570" 00:22:13.650 ], 00:22:13.650 "product_name": "Malloc disk", 00:22:13.650 "block_size": 512, 00:22:13.650 "num_blocks": 65536, 00:22:13.650 "uuid": "cbd7feb9-3959-422e-bd38-fec147b1e570", 00:22:13.650 "assigned_rate_limits": { 00:22:13.650 "rw_ios_per_sec": 0, 00:22:13.650 "rw_mbytes_per_sec": 0, 00:22:13.650 "r_mbytes_per_sec": 0, 00:22:13.650 "w_mbytes_per_sec": 0 00:22:13.650 }, 00:22:13.650 "claimed": true, 00:22:13.650 "claim_type": "exclusive_write", 00:22:13.650 "zoned": false, 00:22:13.650 "supported_io_types": { 00:22:13.650 "read": true, 00:22:13.650 "write": true, 00:22:13.650 "unmap": true, 00:22:13.650 "write_zeroes": true, 00:22:13.650 "flush": true, 00:22:13.650 "reset": true, 00:22:13.650 "compare": false, 00:22:13.650 "compare_and_write": false, 00:22:13.650 "abort": true, 00:22:13.650 "nvme_admin": false, 00:22:13.650 "nvme_io": false 00:22:13.650 }, 00:22:13.650 "memory_domains": [ 00:22:13.650 { 00:22:13.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.650 "dma_device_type": 2 00:22:13.650 } 00:22:13.650 ], 00:22:13.650 "driver_specific": {} 00:22:13.650 } 00:22:13.650 ] 00:22:13.650 12:41:56 -- common/autotest_common.sh@895 -- # return 0 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@125 -- # local tmp 
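The verify_raid_bdev_state helper being traced here reduces to a single RPC-plus-jq pipeline: dump every raid bdev over the test's dedicated socket, select the array under test, then compare the state, raid_level, and num_base_bdevs_* fields against the expected values passed in. A minimal sketch of that check, assuming the same socket path and the field names visible in the dumps in this log (the bare assertions are illustrative, not the suite's exact code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Fetch the JSON object for the array under test, exactly as the trace does
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')
    # Compare the fields the helper verifies; values match this point in the run
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") -eq 1 ]]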
00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.650 12:41:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.910 12:41:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.910 "name": "Existed_Raid", 00:22:13.910 "uuid": "88cfaf3d-5c4e-49b9-be28-2a85e9059ab9", 00:22:13.910 "strip_size_kb": 0, 00:22:13.910 "state": "configuring", 00:22:13.910 "raid_level": "raid1", 00:22:13.910 "superblock": true, 00:22:13.910 "num_base_bdevs": 4, 00:22:13.910 "num_base_bdevs_discovered": 1, 00:22:13.910 "num_base_bdevs_operational": 4, 00:22:13.910 "base_bdevs_list": [ 00:22:13.910 { 00:22:13.910 "name": "BaseBdev1", 00:22:13.910 "uuid": "cbd7feb9-3959-422e-bd38-fec147b1e570", 00:22:13.910 "is_configured": true, 00:22:13.910 "data_offset": 2048, 00:22:13.910 "data_size": 63488 00:22:13.910 }, 00:22:13.910 { 00:22:13.910 "name": "BaseBdev2", 00:22:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.910 "is_configured": false, 00:22:13.910 "data_offset": 0, 00:22:13.910 "data_size": 0 00:22:13.910 }, 00:22:13.910 { 00:22:13.910 "name": "BaseBdev3", 00:22:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.910 "is_configured": false, 00:22:13.910 "data_offset": 0, 00:22:13.910 "data_size": 0 00:22:13.910 }, 00:22:13.910 { 00:22:13.910 "name": "BaseBdev4", 00:22:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.910 "is_configured": false, 00:22:13.910 "data_offset": 0, 00:22:13.910 "data_size": 0 00:22:13.910 } 00:22:13.910 ] 00:22:13.910 }' 00:22:13.910 12:41:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.910 12:41:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.479 12:41:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:14.479 [2024-10-01 12:41:56.998047] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.479 [2024-10-01 12:41:56.998180] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:14.739 12:41:57 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:14.739 12:41:57 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:14.998 12:41:57 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:14.998 BaseBdev1 00:22:14.998 12:41:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:14.998 12:41:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:14.998 12:41:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:14.998 12:41:57 -- common/autotest_common.sh@889 -- # local i 00:22:14.998 12:41:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:14.998 12:41:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:14.998 12:41:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:15.258 12:41:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:15.518 [ 00:22:15.518 { 00:22:15.518 "name": "BaseBdev1", 00:22:15.518 "aliases": [ 00:22:15.518 "abb48aae-43fc-45f3-902c-f09bb5d7a693" 00:22:15.518 ], 00:22:15.518 
"product_name": "Malloc disk", 00:22:15.518 "block_size": 512, 00:22:15.518 "num_blocks": 65536, 00:22:15.518 "uuid": "abb48aae-43fc-45f3-902c-f09bb5d7a693", 00:22:15.518 "assigned_rate_limits": { 00:22:15.518 "rw_ios_per_sec": 0, 00:22:15.518 "rw_mbytes_per_sec": 0, 00:22:15.518 "r_mbytes_per_sec": 0, 00:22:15.518 "w_mbytes_per_sec": 0 00:22:15.518 }, 00:22:15.518 "claimed": false, 00:22:15.518 "zoned": false, 00:22:15.518 "supported_io_types": { 00:22:15.518 "read": true, 00:22:15.518 "write": true, 00:22:15.518 "unmap": true, 00:22:15.518 "write_zeroes": true, 00:22:15.518 "flush": true, 00:22:15.518 "reset": true, 00:22:15.518 "compare": false, 00:22:15.518 "compare_and_write": false, 00:22:15.518 "abort": true, 00:22:15.518 "nvme_admin": false, 00:22:15.518 "nvme_io": false 00:22:15.518 }, 00:22:15.518 "memory_domains": [ 00:22:15.518 { 00:22:15.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.518 "dma_device_type": 2 00:22:15.518 } 00:22:15.518 ], 00:22:15.518 "driver_specific": {} 00:22:15.518 } 00:22:15.518 ] 00:22:15.518 12:41:57 -- common/autotest_common.sh@895 -- # return 0 00:22:15.518 12:41:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:15.518 [2024-10-01 12:41:58.023659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:15.518 [2024-10-01 12:41:58.025641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:15.518 [2024-10-01 12:41:58.025820] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.518 [2024-10-01 12:41:58.025895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:15.518 [2024-10-01 12:41:58.025948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:15.518 [2024-10-01 12:41:58.025974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:15.518 [2024-10-01 12:41:58.026007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.518 12:41:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.778 12:41:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.778 "name": "Existed_Raid", 00:22:15.778 "uuid": 
"2131f9f2-1e7c-4312-80f9-c0e86ff541f2", 00:22:15.778 "strip_size_kb": 0, 00:22:15.778 "state": "configuring", 00:22:15.778 "raid_level": "raid1", 00:22:15.778 "superblock": true, 00:22:15.778 "num_base_bdevs": 4, 00:22:15.778 "num_base_bdevs_discovered": 1, 00:22:15.778 "num_base_bdevs_operational": 4, 00:22:15.778 "base_bdevs_list": [ 00:22:15.778 { 00:22:15.778 "name": "BaseBdev1", 00:22:15.778 "uuid": "abb48aae-43fc-45f3-902c-f09bb5d7a693", 00:22:15.778 "is_configured": true, 00:22:15.778 "data_offset": 2048, 00:22:15.778 "data_size": 63488 00:22:15.778 }, 00:22:15.778 { 00:22:15.778 "name": "BaseBdev2", 00:22:15.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.778 "is_configured": false, 00:22:15.778 "data_offset": 0, 00:22:15.778 "data_size": 0 00:22:15.778 }, 00:22:15.778 { 00:22:15.778 "name": "BaseBdev3", 00:22:15.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.778 "is_configured": false, 00:22:15.778 "data_offset": 0, 00:22:15.778 "data_size": 0 00:22:15.778 }, 00:22:15.778 { 00:22:15.778 "name": "BaseBdev4", 00:22:15.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.778 "is_configured": false, 00:22:15.778 "data_offset": 0, 00:22:15.778 "data_size": 0 00:22:15.778 } 00:22:15.778 ] 00:22:15.778 }' 00:22:15.778 12:41:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.778 12:41:58 -- common/autotest_common.sh@10 -- # set +x 00:22:16.346 12:41:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:16.605 [2024-10-01 12:41:58.956513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:16.605 BaseBdev2 00:22:16.605 12:41:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:16.605 12:41:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:16.605 12:41:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:16.605 12:41:58 -- common/autotest_common.sh@889 -- # local i 00:22:16.605 12:41:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:16.605 12:41:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:16.605 12:41:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:16.864 12:41:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:16.864 [ 00:22:16.864 { 00:22:16.864 "name": "BaseBdev2", 00:22:16.864 "aliases": [ 00:22:16.864 "97706ba6-54d9-48db-8f3a-4231c14f4b95" 00:22:16.864 ], 00:22:16.864 "product_name": "Malloc disk", 00:22:16.864 "block_size": 512, 00:22:16.864 "num_blocks": 65536, 00:22:16.864 "uuid": "97706ba6-54d9-48db-8f3a-4231c14f4b95", 00:22:16.864 "assigned_rate_limits": { 00:22:16.864 "rw_ios_per_sec": 0, 00:22:16.864 "rw_mbytes_per_sec": 0, 00:22:16.864 "r_mbytes_per_sec": 0, 00:22:16.864 "w_mbytes_per_sec": 0 00:22:16.864 }, 00:22:16.864 "claimed": true, 00:22:16.864 "claim_type": "exclusive_write", 00:22:16.864 "zoned": false, 00:22:16.864 "supported_io_types": { 00:22:16.864 "read": true, 00:22:16.864 "write": true, 00:22:16.864 "unmap": true, 00:22:16.864 "write_zeroes": true, 00:22:16.864 "flush": true, 00:22:16.864 "reset": true, 00:22:16.864 "compare": false, 00:22:16.864 "compare_and_write": false, 00:22:16.864 "abort": true, 00:22:16.864 "nvme_admin": false, 00:22:16.864 "nvme_io": false 00:22:16.864 }, 00:22:16.864 "memory_domains": [ 00:22:16.864 { 
00:22:16.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.864 "dma_device_type": 2 00:22:16.864 } 00:22:16.864 ], 00:22:16.864 "driver_specific": {} 00:22:16.864 } 00:22:16.864 ] 00:22:16.864 12:41:59 -- common/autotest_common.sh@895 -- # return 0 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.864 12:41:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.123 12:41:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.123 "name": "Existed_Raid", 00:22:17.123 "uuid": "2131f9f2-1e7c-4312-80f9-c0e86ff541f2", 00:22:17.123 "strip_size_kb": 0, 00:22:17.123 "state": "configuring", 00:22:17.123 "raid_level": "raid1", 00:22:17.123 "superblock": true, 00:22:17.123 "num_base_bdevs": 4, 00:22:17.123 "num_base_bdevs_discovered": 2, 00:22:17.123 "num_base_bdevs_operational": 4, 00:22:17.123 "base_bdevs_list": [ 00:22:17.123 { 00:22:17.123 "name": "BaseBdev1", 00:22:17.123 "uuid": "abb48aae-43fc-45f3-902c-f09bb5d7a693", 00:22:17.123 "is_configured": true, 00:22:17.123 "data_offset": 2048, 00:22:17.123 "data_size": 63488 00:22:17.123 }, 00:22:17.123 { 00:22:17.123 "name": "BaseBdev2", 00:22:17.123 "uuid": "97706ba6-54d9-48db-8f3a-4231c14f4b95", 00:22:17.123 "is_configured": true, 00:22:17.123 "data_offset": 2048, 00:22:17.123 "data_size": 63488 00:22:17.123 }, 00:22:17.123 { 00:22:17.123 "name": "BaseBdev3", 00:22:17.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.123 "is_configured": false, 00:22:17.123 "data_offset": 0, 00:22:17.123 "data_size": 0 00:22:17.123 }, 00:22:17.123 { 00:22:17.123 "name": "BaseBdev4", 00:22:17.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.123 "is_configured": false, 00:22:17.123 "data_offset": 0, 00:22:17.123 "data_size": 0 00:22:17.123 } 00:22:17.123 ] 00:22:17.123 }' 00:22:17.123 12:41:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.123 12:41:59 -- common/autotest_common.sh@10 -- # set +x 00:22:17.692 12:42:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:17.950 [2024-10-01 12:42:00.240494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:17.950 BaseBdev3 00:22:17.950 12:42:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:17.951 12:42:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:17.951 12:42:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:17.951 12:42:00 -- 
common/autotest_common.sh@889 -- # local i 00:22:17.951 12:42:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:17.951 12:42:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:17.951 12:42:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:17.951 12:42:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:18.209 [ 00:22:18.209 { 00:22:18.209 "name": "BaseBdev3", 00:22:18.209 "aliases": [ 00:22:18.210 "401d7181-a5c1-4102-a2dd-2f1f89438765" 00:22:18.210 ], 00:22:18.210 "product_name": "Malloc disk", 00:22:18.210 "block_size": 512, 00:22:18.210 "num_blocks": 65536, 00:22:18.210 "uuid": "401d7181-a5c1-4102-a2dd-2f1f89438765", 00:22:18.210 "assigned_rate_limits": { 00:22:18.210 "rw_ios_per_sec": 0, 00:22:18.210 "rw_mbytes_per_sec": 0, 00:22:18.210 "r_mbytes_per_sec": 0, 00:22:18.210 "w_mbytes_per_sec": 0 00:22:18.210 }, 00:22:18.210 "claimed": true, 00:22:18.210 "claim_type": "exclusive_write", 00:22:18.210 "zoned": false, 00:22:18.210 "supported_io_types": { 00:22:18.210 "read": true, 00:22:18.210 "write": true, 00:22:18.210 "unmap": true, 00:22:18.210 "write_zeroes": true, 00:22:18.210 "flush": true, 00:22:18.210 "reset": true, 00:22:18.210 "compare": false, 00:22:18.210 "compare_and_write": false, 00:22:18.210 "abort": true, 00:22:18.210 "nvme_admin": false, 00:22:18.210 "nvme_io": false 00:22:18.210 }, 00:22:18.210 "memory_domains": [ 00:22:18.210 { 00:22:18.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.210 "dma_device_type": 2 00:22:18.210 } 00:22:18.210 ], 00:22:18.210 "driver_specific": {} 00:22:18.210 } 00:22:18.210 ] 00:22:18.210 12:42:00 -- common/autotest_common.sh@895 -- # return 0 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.210 12:42:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.469 12:42:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.469 "name": "Existed_Raid", 00:22:18.469 "uuid": "2131f9f2-1e7c-4312-80f9-c0e86ff541f2", 00:22:18.469 "strip_size_kb": 0, 00:22:18.469 "state": "configuring", 00:22:18.469 "raid_level": "raid1", 00:22:18.469 "superblock": true, 00:22:18.469 "num_base_bdevs": 4, 00:22:18.469 "num_base_bdevs_discovered": 3, 00:22:18.469 "num_base_bdevs_operational": 4, 00:22:18.469 "base_bdevs_list": [ 00:22:18.469 { 00:22:18.469 "name": "BaseBdev1", 00:22:18.469 
"uuid": "abb48aae-43fc-45f3-902c-f09bb5d7a693", 00:22:18.469 "is_configured": true, 00:22:18.469 "data_offset": 2048, 00:22:18.469 "data_size": 63488 00:22:18.469 }, 00:22:18.469 { 00:22:18.469 "name": "BaseBdev2", 00:22:18.469 "uuid": "97706ba6-54d9-48db-8f3a-4231c14f4b95", 00:22:18.469 "is_configured": true, 00:22:18.469 "data_offset": 2048, 00:22:18.469 "data_size": 63488 00:22:18.469 }, 00:22:18.469 { 00:22:18.469 "name": "BaseBdev3", 00:22:18.469 "uuid": "401d7181-a5c1-4102-a2dd-2f1f89438765", 00:22:18.469 "is_configured": true, 00:22:18.469 "data_offset": 2048, 00:22:18.469 "data_size": 63488 00:22:18.469 }, 00:22:18.469 { 00:22:18.469 "name": "BaseBdev4", 00:22:18.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.469 "is_configured": false, 00:22:18.469 "data_offset": 0, 00:22:18.469 "data_size": 0 00:22:18.469 } 00:22:18.469 ] 00:22:18.469 }' 00:22:18.469 12:42:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.469 12:42:00 -- common/autotest_common.sh@10 -- # set +x 00:22:19.038 12:42:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:19.038 [2024-10-01 12:42:01.488422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:19.038 [2024-10-01 12:42:01.488610] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:22:19.038 [2024-10-01 12:42:01.488622] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:19.038 [2024-10-01 12:42:01.488729] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:19.038 [2024-10-01 12:42:01.489023] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:22:19.038 [2024-10-01 12:42:01.489042] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:22:19.038 [2024-10-01 12:42:01.489173] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.038 BaseBdev4 00:22:19.038 12:42:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:19.038 12:42:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:22:19.038 12:42:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:19.038 12:42:01 -- common/autotest_common.sh@889 -- # local i 00:22:19.038 12:42:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:19.038 12:42:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:19.038 12:42:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:19.313 12:42:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:19.611 [ 00:22:19.611 { 00:22:19.611 "name": "BaseBdev4", 00:22:19.611 "aliases": [ 00:22:19.611 "68efa2d5-27d3-4b5e-b05f-490144575ea0" 00:22:19.611 ], 00:22:19.611 "product_name": "Malloc disk", 00:22:19.611 "block_size": 512, 00:22:19.611 "num_blocks": 65536, 00:22:19.611 "uuid": "68efa2d5-27d3-4b5e-b05f-490144575ea0", 00:22:19.611 "assigned_rate_limits": { 00:22:19.611 "rw_ios_per_sec": 0, 00:22:19.611 "rw_mbytes_per_sec": 0, 00:22:19.611 "r_mbytes_per_sec": 0, 00:22:19.611 "w_mbytes_per_sec": 0 00:22:19.611 }, 00:22:19.611 "claimed": true, 00:22:19.611 "claim_type": "exclusive_write", 00:22:19.611 "zoned": false, 00:22:19.611 "supported_io_types": { 00:22:19.611 
"read": true, 00:22:19.611 "write": true, 00:22:19.611 "unmap": true, 00:22:19.611 "write_zeroes": true, 00:22:19.611 "flush": true, 00:22:19.611 "reset": true, 00:22:19.611 "compare": false, 00:22:19.611 "compare_and_write": false, 00:22:19.611 "abort": true, 00:22:19.611 "nvme_admin": false, 00:22:19.611 "nvme_io": false 00:22:19.611 }, 00:22:19.611 "memory_domains": [ 00:22:19.611 { 00:22:19.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.611 "dma_device_type": 2 00:22:19.611 } 00:22:19.611 ], 00:22:19.611 "driver_specific": {} 00:22:19.611 } 00:22:19.611 ] 00:22:19.611 12:42:01 -- common/autotest_common.sh@895 -- # return 0 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.611 12:42:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.611 12:42:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:19.611 "name": "Existed_Raid", 00:22:19.611 "uuid": "2131f9f2-1e7c-4312-80f9-c0e86ff541f2", 00:22:19.611 "strip_size_kb": 0, 00:22:19.611 "state": "online", 00:22:19.611 "raid_level": "raid1", 00:22:19.611 "superblock": true, 00:22:19.611 "num_base_bdevs": 4, 00:22:19.611 "num_base_bdevs_discovered": 4, 00:22:19.611 "num_base_bdevs_operational": 4, 00:22:19.611 "base_bdevs_list": [ 00:22:19.611 { 00:22:19.611 "name": "BaseBdev1", 00:22:19.611 "uuid": "abb48aae-43fc-45f3-902c-f09bb5d7a693", 00:22:19.611 "is_configured": true, 00:22:19.611 "data_offset": 2048, 00:22:19.611 "data_size": 63488 00:22:19.611 }, 00:22:19.611 { 00:22:19.611 "name": "BaseBdev2", 00:22:19.611 "uuid": "97706ba6-54d9-48db-8f3a-4231c14f4b95", 00:22:19.611 "is_configured": true, 00:22:19.611 "data_offset": 2048, 00:22:19.611 "data_size": 63488 00:22:19.611 }, 00:22:19.611 { 00:22:19.611 "name": "BaseBdev3", 00:22:19.611 "uuid": "401d7181-a5c1-4102-a2dd-2f1f89438765", 00:22:19.611 "is_configured": true, 00:22:19.611 "data_offset": 2048, 00:22:19.611 "data_size": 63488 00:22:19.611 }, 00:22:19.611 { 00:22:19.611 "name": "BaseBdev4", 00:22:19.611 "uuid": "68efa2d5-27d3-4b5e-b05f-490144575ea0", 00:22:19.611 "is_configured": true, 00:22:19.611 "data_offset": 2048, 00:22:19.611 "data_size": 63488 00:22:19.611 } 00:22:19.611 ] 00:22:19.611 }' 00:22:19.611 12:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:19.611 12:42:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.180 12:42:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:20.440 [2024-10-01 12:42:02.734806] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.440 12:42:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.699 12:42:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.699 "name": "Existed_Raid", 00:22:20.699 "uuid": "2131f9f2-1e7c-4312-80f9-c0e86ff541f2", 00:22:20.699 "strip_size_kb": 0, 00:22:20.699 "state": "online", 00:22:20.699 "raid_level": "raid1", 00:22:20.699 "superblock": true, 00:22:20.699 "num_base_bdevs": 4, 00:22:20.699 "num_base_bdevs_discovered": 3, 00:22:20.699 "num_base_bdevs_operational": 3, 00:22:20.699 "base_bdevs_list": [ 00:22:20.699 { 00:22:20.699 "name": null, 00:22:20.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.699 "is_configured": false, 00:22:20.699 "data_offset": 2048, 00:22:20.699 "data_size": 63488 00:22:20.699 }, 00:22:20.699 { 00:22:20.699 "name": "BaseBdev2", 00:22:20.699 "uuid": "97706ba6-54d9-48db-8f3a-4231c14f4b95", 00:22:20.700 "is_configured": true, 00:22:20.700 "data_offset": 2048, 00:22:20.700 "data_size": 63488 00:22:20.700 }, 00:22:20.700 { 00:22:20.700 "name": "BaseBdev3", 00:22:20.700 "uuid": "401d7181-a5c1-4102-a2dd-2f1f89438765", 00:22:20.700 "is_configured": true, 00:22:20.700 "data_offset": 2048, 00:22:20.700 "data_size": 63488 00:22:20.700 }, 00:22:20.700 { 00:22:20.700 "name": "BaseBdev4", 00:22:20.700 "uuid": "68efa2d5-27d3-4b5e-b05f-490144575ea0", 00:22:20.700 "is_configured": true, 00:22:20.700 "data_offset": 2048, 00:22:20.700 "data_size": 63488 00:22:20.700 } 00:22:20.700 ] 00:22:20.700 }' 00:22:20.700 12:42:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.700 12:42:03 -- common/autotest_common.sh@10 -- # set +x 00:22:21.267 12:42:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:21.267 12:42:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:21.267 12:42:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.267 12:42:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:21.267 12:42:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:21.267 12:42:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.267 12:42:03 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:21.526 [2024-10-01 12:42:03.931667] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:21.526 12:42:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:21.526 12:42:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:21.526 12:42:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.526 12:42:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:21.785 12:42:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:21.785 12:42:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.785 12:42:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:22.043 [2024-10-01 12:42:04.393404] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:22.043 12:42:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:22.043 12:42:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:22.043 12:42:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.043 12:42:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:22.300 12:42:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:22.300 12:42:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:22.300 12:42:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:22.558 [2024-10-01 12:42:04.849554] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:22.558 [2024-10-01 12:42:04.849581] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.558 [2024-10-01 12:42:04.849658] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.558 [2024-10-01 12:42:04.929012] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.558 [2024-10-01 12:42:04.929047] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:22:22.559 12:42:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:22.559 12:42:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:22.559 12:42:04 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.559 12:42:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:22.816 12:42:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:22.816 12:42:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:22.816 12:42:05 -- bdev/bdev_raid.sh@287 -- # killprocess 121670 00:22:22.816 12:42:05 -- common/autotest_common.sh@926 -- # '[' -z 121670 ']' 00:22:22.816 12:42:05 -- common/autotest_common.sh@930 -- # kill -0 121670 00:22:22.816 12:42:05 -- common/autotest_common.sh@931 -- # uname 00:22:22.816 12:42:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.816 12:42:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121670 00:22:22.816 12:42:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:22.816 12:42:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:22.816 12:42:05 -- common/autotest_common.sh@944 -- # echo 'killing process 
with pid 121670' 00:22:22.816 killing process with pid 121670 00:22:22.816 12:42:05 -- common/autotest_common.sh@945 -- # kill 121670 00:22:22.816 [2024-10-01 12:42:05.171493] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:22.816 [2024-10-01 12:42:05.171595] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:22.816 12:42:05 -- common/autotest_common.sh@950 -- # wait 121670 00:22:23.749 ************************************ 00:22:23.749 END TEST raid_state_function_test_sb 00:22:23.749 ************************************ 00:22:23.749 12:42:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:23.749 00:22:23.749 real 0m12.852s 00:22:23.749 user 0m21.886s 00:22:23.749 sys 0m2.172s 00:22:23.749 12:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.749 12:42:06 -- common/autotest_common.sh@10 -- # set +x 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:24.008 12:42:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:24.008 12:42:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:24.008 12:42:06 -- common/autotest_common.sh@10 -- # set +x 00:22:24.008 ************************************ 00:22:24.008 START TEST raid_superblock_test 00:22:24.008 ************************************ 00:22:24.008 12:42:06 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@357 -- # raid_pid=122092 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:24.008 12:42:06 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122092 /var/tmp/spdk-raid.sock 00:22:24.008 12:42:06 -- common/autotest_common.sh@819 -- # '[' -z 122092 ']' 00:22:24.008 12:42:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:24.009 12:42:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.009 12:42:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:24.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
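The trace above launches the raid_superblock_test fixture: a bare bdev_svc app is started with a private RPC socket and bdev_raid debug logging, and waitforlisten blocks until that socket answers. A minimal sketch of the same start-and-wait pattern, using only the binary and socket path shown in this log (the polling loop is an illustrative stand-in for the real waitforlisten helper, and rpc_get_methods is assumed here as a cheap liveness probe):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # start the app under test with raid debug logging, as in the trace above
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
  raid_pid=$!

  # poll until the RPC server behind the socket starts answering
  until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done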
00:22:24.009 12:42:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.009 12:42:06 -- common/autotest_common.sh@10 -- # set +x 00:22:24.009 [2024-10-01 12:42:06.388357] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:24.009 [2024-10-01 12:42:06.388980] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122092 ] 00:22:24.266 [2024-10-01 12:42:06.554523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.267 [2024-10-01 12:42:06.704289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.524 [2024-10-01 12:42:06.854355] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.782 12:42:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.782 12:42:07 -- common/autotest_common.sh@852 -- # return 0 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:24.782 12:42:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:25.041 malloc1 00:22:25.041 12:42:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:25.041 [2024-10-01 12:42:07.564501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:25.041 [2024-10-01 12:42:07.564578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.041 [2024-10-01 12:42:07.564620] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:25.041 [2024-10-01 12:42:07.564663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.041 [2024-10-01 12:42:07.566764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.041 [2024-10-01 12:42:07.566814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:25.041 pt1 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:25.301 malloc2 00:22:25.301 12:42:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:25.560 [2024-10-01 12:42:08.004849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:25.560 [2024-10-01 12:42:08.004916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.560 [2024-10-01 12:42:08.004952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:25.560 [2024-10-01 12:42:08.005000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.560 [2024-10-01 12:42:08.007075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.560 [2024-10-01 12:42:08.007145] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.560 pt2 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.560 12:42:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:25.819 malloc3 00:22:25.819 12:42:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:26.077 [2024-10-01 12:42:08.417853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:26.077 [2024-10-01 12:42:08.417917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.077 [2024-10-01 12:42:08.417951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:26.077 [2024-10-01 12:42:08.417986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.077 [2024-10-01 12:42:08.420113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.077 [2024-10-01 12:42:08.420166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:26.077 pt3 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:26.077 12:42:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:26.336 malloc4 00:22:26.336 12:42:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:26.336 [2024-10-01 12:42:08.815358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:26.336 [2024-10-01 12:42:08.815425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.336 [2024-10-01 12:42:08.815452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:26.336 [2024-10-01 12:42:08.815488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.336 [2024-10-01 12:42:08.817616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.336 [2024-10-01 12:42:08.817666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:26.336 pt4 00:22:26.336 12:42:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:26.336 12:42:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:26.336 12:42:08 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:26.595 [2024-10-01 12:42:08.995136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:26.595 [2024-10-01 12:42:08.996918] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:26.595 [2024-10-01 12:42:08.996978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:26.595 [2024-10-01 12:42:08.997018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:26.595 [2024-10-01 12:42:08.997180] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:22:26.595 [2024-10-01 12:42:08.997189] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:26.595 [2024-10-01 12:42:08.997315] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:26.595 [2024-10-01 12:42:08.997617] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:22:26.595 [2024-10-01 12:42:08.997635] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:22:26.595 [2024-10-01 12:42:08.997747] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
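Condensed, the setup just traced is: four 32 MiB malloc bdevs, each wrapped in a passthru vbdev with a fixed UUID, and a raid1 built on the passthru layer with -s so a superblock is written to every member (the passthru layer is what later lets the test detach a member while its malloc backing, superblock included, survives). A sketch of the same RPC sequence in loop form, reusing the rpc/sock variables from the earlier sketch; every flag is one visible in the trace:

  for i in 1 2 3 4; do
    # 32 MiB backing device with 512-byte blocks
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    # passthru wrapper over the malloc bdev, with a deterministic UUID
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
      -u "00000000-0000-0000-0000-00000000000$i"
  done
  # -s asks for a raid superblock on each base bdev
  "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s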
00:22:26.595 12:42:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.852 12:42:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.852 "name": "raid_bdev1", 00:22:26.852 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:26.852 "strip_size_kb": 0, 00:22:26.852 "state": "online", 00:22:26.852 "raid_level": "raid1", 00:22:26.852 "superblock": true, 00:22:26.852 "num_base_bdevs": 4, 00:22:26.852 "num_base_bdevs_discovered": 4, 00:22:26.852 "num_base_bdevs_operational": 4, 00:22:26.852 "base_bdevs_list": [ 00:22:26.852 { 00:22:26.852 "name": "pt1", 00:22:26.852 "uuid": "e3c59abf-4171-5dad-9293-20a2ac922cd7", 00:22:26.852 "is_configured": true, 00:22:26.852 "data_offset": 2048, 00:22:26.852 "data_size": 63488 00:22:26.852 }, 00:22:26.852 { 00:22:26.852 "name": "pt2", 00:22:26.852 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:26.852 "is_configured": true, 00:22:26.852 "data_offset": 2048, 00:22:26.852 "data_size": 63488 00:22:26.853 }, 00:22:26.853 { 00:22:26.853 "name": "pt3", 00:22:26.853 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:26.853 "is_configured": true, 00:22:26.853 "data_offset": 2048, 00:22:26.853 "data_size": 63488 00:22:26.853 }, 00:22:26.853 { 00:22:26.853 "name": "pt4", 00:22:26.853 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:26.853 "is_configured": true, 00:22:26.853 "data_offset": 2048, 00:22:26.853 "data_size": 63488 00:22:26.853 } 00:22:26.853 ] 00:22:26.853 }' 00:22:26.853 12:42:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.853 12:42:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.419 12:42:09 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:27.419 12:42:09 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:27.419 [2024-10-01 12:42:09.917886] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.420 12:42:09 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=41f4d11f-d080-4fea-a869-ea7a00ed80fd 00:22:27.420 12:42:09 -- bdev/bdev_raid.sh@380 -- # '[' -z 41f4d11f-d080-4fea-a869-ea7a00ed80fd ']' 00:22:27.420 12:42:09 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:27.679 [2024-10-01 12:42:10.105465] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:27.679 [2024-10-01 12:42:10.105492] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:27.679 [2024-10-01 12:42:10.105555] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.679 [2024-10-01 12:42:10.105623] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:27.679 [2024-10-01 12:42:10.105631] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:22:27.679 12:42:10 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.679 12:42:10 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:27.937 12:42:10 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:27.937 12:42:10 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:27.937 12:42:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:27.937 12:42:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
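verify_raid_bdev_state, invoked throughout this run, boils down to dumping all raid bdevs and comparing jq-extracted fields against the expected values. A sketch of the same checks done inline (field names are the ones in the JSON dumps above; the real helper presumably also compares raid_level and strip_size_kb, which it declares as locals):

  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<<"$info")" = online ] || exit 1
  [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 4 ] || exit 1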
00:22:28.195 12:42:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:28.195 12:42:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:28.195 12:42:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:28.195 12:42:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:28.451 12:42:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:28.451 12:42:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:28.709 12:42:11 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:28.709 12:42:11 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:28.709 12:42:11 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:28.709 12:42:11 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:28.709 12:42:11 -- common/autotest_common.sh@640 -- # local es=0 00:22:28.709 12:42:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:28.709 12:42:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.709 12:42:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:28.709 12:42:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.709 12:42:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:28.709 12:42:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.709 12:42:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:28.709 12:42:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.709 12:42:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:28.709 12:42:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:28.967 [2024-10-01 12:42:11.363566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:28.967 [2024-10-01 12:42:11.365379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:28.967 [2024-10-01 12:42:11.365430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:28.967 [2024-10-01 12:42:11.365458] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:28.967 [2024-10-01 12:42:11.365497] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:28.967 [2024-10-01 12:42:11.365562] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:28.967 [2024-10-01 12:42:11.365587] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:28.967 [2024-10-01 12:42:11.365633] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:22:28.967 [2024-10-01 12:42:11.365653] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:28.967 [2024-10-01 12:42:11.365661] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:22:28.967 request: 00:22:28.967 { 00:22:28.967 "name": "raid_bdev1", 00:22:28.967 "raid_level": "raid1", 00:22:28.967 "base_bdevs": [ 00:22:28.967 "malloc1", 00:22:28.967 "malloc2", 00:22:28.967 "malloc3", 00:22:28.967 "malloc4" 00:22:28.967 ], 00:22:28.967 "superblock": false, 00:22:28.967 "method": "bdev_raid_create", 00:22:28.967 "req_id": 1 00:22:28.967 } 00:22:28.967 Got JSON-RPC error response 00:22:28.967 response: 00:22:28.967 { 00:22:28.967 "code": -17, 00:22:28.967 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:28.967 } 00:22:28.967 12:42:11 -- common/autotest_common.sh@643 -- # es=1 00:22:28.967 12:42:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:28.967 12:42:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:28.967 12:42:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:28.967 12:42:11 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.967 12:42:11 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:29.226 [2024-10-01 12:42:11.727014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:29.226 [2024-10-01 12:42:11.727076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.226 [2024-10-01 12:42:11.727119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:29.226 [2024-10-01 12:42:11.727142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.226 [2024-10-01 12:42:11.729277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.226 [2024-10-01 12:42:11.729337] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:29.226 [2024-10-01 12:42:11.729435] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:29.226 [2024-10-01 12:42:11.729479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:29.226 pt1 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:29.226 12:42:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:29.226 12:42:11 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:22:29.484 12:42:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.484 12:42:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.484 12:42:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:29.484 "name": "raid_bdev1", 00:22:29.484 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:29.484 "strip_size_kb": 0, 00:22:29.484 "state": "configuring", 00:22:29.484 "raid_level": "raid1", 00:22:29.484 "superblock": true, 00:22:29.484 "num_base_bdevs": 4, 00:22:29.484 "num_base_bdevs_discovered": 1, 00:22:29.484 "num_base_bdevs_operational": 4, 00:22:29.484 "base_bdevs_list": [ 00:22:29.484 { 00:22:29.484 "name": "pt1", 00:22:29.484 "uuid": "e3c59abf-4171-5dad-9293-20a2ac922cd7", 00:22:29.484 "is_configured": true, 00:22:29.484 "data_offset": 2048, 00:22:29.484 "data_size": 63488 00:22:29.484 }, 00:22:29.484 { 00:22:29.484 "name": null, 00:22:29.484 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:29.484 "is_configured": false, 00:22:29.484 "data_offset": 2048, 00:22:29.484 "data_size": 63488 00:22:29.484 }, 00:22:29.484 { 00:22:29.484 "name": null, 00:22:29.484 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:29.484 "is_configured": false, 00:22:29.484 "data_offset": 2048, 00:22:29.484 "data_size": 63488 00:22:29.484 }, 00:22:29.484 { 00:22:29.484 "name": null, 00:22:29.484 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:29.484 "is_configured": false, 00:22:29.484 "data_offset": 2048, 00:22:29.484 "data_size": 63488 00:22:29.484 } 00:22:29.484 ] 00:22:29.484 }' 00:22:29.484 12:42:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:29.484 12:42:11 -- common/autotest_common.sh@10 -- # set +x 00:22:30.052 12:42:12 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:22:30.052 12:42:12 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:30.310 [2024-10-01 12:42:12.641690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:30.310 [2024-10-01 12:42:12.641755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.310 [2024-10-01 12:42:12.641805] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:30.310 [2024-10-01 12:42:12.641824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.310 [2024-10-01 12:42:12.642239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.310 [2024-10-01 12:42:12.642284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:30.310 [2024-10-01 12:42:12.642381] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:30.310 [2024-10-01 12:42:12.642405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:30.310 pt2 00:22:30.310 12:42:12 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:30.310 [2024-10-01 12:42:12.817415] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.569 12:42:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.569 12:42:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.569 "name": "raid_bdev1", 00:22:30.569 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:30.569 "strip_size_kb": 0, 00:22:30.569 "state": "configuring", 00:22:30.569 "raid_level": "raid1", 00:22:30.569 "superblock": true, 00:22:30.569 "num_base_bdevs": 4, 00:22:30.569 "num_base_bdevs_discovered": 1, 00:22:30.569 "num_base_bdevs_operational": 4, 00:22:30.569 "base_bdevs_list": [ 00:22:30.569 { 00:22:30.569 "name": "pt1", 00:22:30.569 "uuid": "e3c59abf-4171-5dad-9293-20a2ac922cd7", 00:22:30.569 "is_configured": true, 00:22:30.569 "data_offset": 2048, 00:22:30.569 "data_size": 63488 00:22:30.569 }, 00:22:30.569 { 00:22:30.569 "name": null, 00:22:30.569 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:30.569 "is_configured": false, 00:22:30.569 "data_offset": 2048, 00:22:30.569 "data_size": 63488 00:22:30.569 }, 00:22:30.569 { 00:22:30.569 "name": null, 00:22:30.569 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:30.569 "is_configured": false, 00:22:30.569 "data_offset": 2048, 00:22:30.569 "data_size": 63488 00:22:30.569 }, 00:22:30.569 { 00:22:30.569 "name": null, 00:22:30.569 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:30.569 "is_configured": false, 00:22:30.569 "data_offset": 2048, 00:22:30.569 "data_size": 63488 00:22:30.569 } 00:22:30.569 ] 00:22:30.569 }' 00:22:30.569 12:42:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.569 12:42:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.137 12:42:13 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:31.137 12:42:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:31.137 12:42:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:31.396 [2024-10-01 12:42:13.720114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:31.396 [2024-10-01 12:42:13.720183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.396 [2024-10-01 12:42:13.720215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:31.396 [2024-10-01 12:42:13.720234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.396 [2024-10-01 12:42:13.720626] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.396 [2024-10-01 12:42:13.720681] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:31.396 [2024-10-01 12:42:13.720767] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:31.396 [2024-10-01 
12:42:13.720790] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:31.396 pt2 00:22:31.396 12:42:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:31.396 12:42:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:31.396 12:42:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:31.396 [2024-10-01 12:42:13.899916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:31.396 [2024-10-01 12:42:13.899983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.396 [2024-10-01 12:42:13.900016] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:31.396 [2024-10-01 12:42:13.900042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.396 [2024-10-01 12:42:13.900432] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.396 [2024-10-01 12:42:13.900483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:31.396 [2024-10-01 12:42:13.900567] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:31.396 [2024-10-01 12:42:13.900585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:31.396 pt3 00:22:31.396 12:42:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:31.396 12:42:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:31.396 12:42:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:31.656 [2024-10-01 12:42:14.079639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:31.656 [2024-10-01 12:42:14.079694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.656 [2024-10-01 12:42:14.079719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:31.656 [2024-10-01 12:42:14.079740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.656 [2024-10-01 12:42:14.080098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.656 [2024-10-01 12:42:14.080146] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:31.656 [2024-10-01 12:42:14.080224] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:31.656 [2024-10-01 12:42:14.080246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:31.656 [2024-10-01 12:42:14.080362] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:22:31.656 [2024-10-01 12:42:14.080371] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:31.656 [2024-10-01 12:42:14.080454] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:31.656 [2024-10-01 12:42:14.080725] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:22:31.656 [2024-10-01 12:42:14.080743] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:22:31.656 [2024-10-01 12:42:14.080870] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.656 pt4 
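No bdev_raid_create is issued anywhere in this passage: each bdev_passthru_create re-exposes a former member, examine finds the superblock written earlier ("raid superblock found on bdev ptN", then the bdev is claimed), and the array assembles itself, flipping from configuring to online once pt4, the last missing member, appears. A sketch for watching that progression (the helper name and jq expression are illustrative; the fields come from this log's JSON dumps):

  assembly_state() {
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1") |
        "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  }
  assembly_state    # e.g. "configuring 3/4" while a member is still missing
  "$rpc" -s "$sock" bdev_passthru_create -b malloc4 -p pt4 \
    -u 00000000-0000-0000-0000-000000000004
  assembly_state    # "online 4/4"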
00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.656 12:42:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.915 12:42:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:31.915 "name": "raid_bdev1", 00:22:31.915 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:31.915 "strip_size_kb": 0, 00:22:31.915 "state": "online", 00:22:31.915 "raid_level": "raid1", 00:22:31.915 "superblock": true, 00:22:31.915 "num_base_bdevs": 4, 00:22:31.915 "num_base_bdevs_discovered": 4, 00:22:31.915 "num_base_bdevs_operational": 4, 00:22:31.915 "base_bdevs_list": [ 00:22:31.915 { 00:22:31.915 "name": "pt1", 00:22:31.915 "uuid": "e3c59abf-4171-5dad-9293-20a2ac922cd7", 00:22:31.915 "is_configured": true, 00:22:31.915 "data_offset": 2048, 00:22:31.915 "data_size": 63488 00:22:31.915 }, 00:22:31.915 { 00:22:31.915 "name": "pt2", 00:22:31.915 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:31.915 "is_configured": true, 00:22:31.915 "data_offset": 2048, 00:22:31.915 "data_size": 63488 00:22:31.915 }, 00:22:31.915 { 00:22:31.915 "name": "pt3", 00:22:31.915 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:31.915 "is_configured": true, 00:22:31.915 "data_offset": 2048, 00:22:31.915 "data_size": 63488 00:22:31.915 }, 00:22:31.915 { 00:22:31.915 "name": "pt4", 00:22:31.915 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:31.915 "is_configured": true, 00:22:31.915 "data_offset": 2048, 00:22:31.915 "data_size": 63488 00:22:31.915 } 00:22:31.915 ] 00:22:31.915 }' 00:22:31.915 12:42:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:31.915 12:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:32.483 12:42:14 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:32.483 12:42:14 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:32.483 [2024-10-01 12:42:14.986528] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.483 12:42:14 -- bdev/bdev_raid.sh@430 -- # '[' 41f4d11f-d080-4fea-a869-ea7a00ed80fd '!=' 41f4d11f-d080-4fea-a869-ea7a00ed80fd ']' 00:22:32.483 12:42:14 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:22:32.483 12:42:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:32.483 12:42:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:32.483 12:42:14 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:32.742 [2024-10-01 12:42:15.190067] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.742 12:42:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.000 12:42:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:33.000 "name": "raid_bdev1", 00:22:33.000 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:33.000 "strip_size_kb": 0, 00:22:33.000 "state": "online", 00:22:33.000 "raid_level": "raid1", 00:22:33.000 "superblock": true, 00:22:33.000 "num_base_bdevs": 4, 00:22:33.000 "num_base_bdevs_discovered": 3, 00:22:33.000 "num_base_bdevs_operational": 3, 00:22:33.000 "base_bdevs_list": [ 00:22:33.000 { 00:22:33.000 "name": null, 00:22:33.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.000 "is_configured": false, 00:22:33.000 "data_offset": 2048, 00:22:33.000 "data_size": 63488 00:22:33.000 }, 00:22:33.000 { 00:22:33.000 "name": "pt2", 00:22:33.000 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:33.000 "is_configured": true, 00:22:33.000 "data_offset": 2048, 00:22:33.000 "data_size": 63488 00:22:33.000 }, 00:22:33.000 { 00:22:33.000 "name": "pt3", 00:22:33.000 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:33.000 "is_configured": true, 00:22:33.000 "data_offset": 2048, 00:22:33.000 "data_size": 63488 00:22:33.000 }, 00:22:33.000 { 00:22:33.000 "name": "pt4", 00:22:33.000 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:33.000 "is_configured": true, 00:22:33.000 "data_offset": 2048, 00:22:33.000 "data_size": 63488 00:22:33.000 } 00:22:33.000 ] 00:22:33.000 }' 00:22:33.000 12:42:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:33.000 12:42:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.568 12:42:15 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:33.827 [2024-10-01 12:42:16.112685] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.827 [2024-10-01 12:42:16.112710] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.827 [2024-10-01 12:42:16.112769] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.827 [2024-10-01 12:42:16.112830] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.827 [2024-10-01 12:42:16.112838] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:22:33.827 12:42:16 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:33.827 12:42:16 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:33.827 12:42:16 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:33.827 12:42:16 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:33.827 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:33.827 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:33.827 12:42:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:34.086 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:34.086 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:34.086 12:42:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:34.345 12:42:16 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:34.603 [2024-10-01 12:42:17.002969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:34.603 [2024-10-01 12:42:17.003036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.603 [2024-10-01 12:42:17.003063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:34.603 [2024-10-01 12:42:17.003111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.603 [2024-10-01 12:42:17.005338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.603 [2024-10-01 12:42:17.005421] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:34.603 [2024-10-01 12:42:17.005524] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:34.603 [2024-10-01 12:42:17.005563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:34.603 pt2 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.603 12:42:17 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.862 12:42:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.862 "name": "raid_bdev1", 00:22:34.862 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:34.862 "strip_size_kb": 0, 00:22:34.862 "state": "configuring", 00:22:34.862 "raid_level": "raid1", 00:22:34.862 "superblock": true, 00:22:34.862 "num_base_bdevs": 4, 00:22:34.862 "num_base_bdevs_discovered": 1, 00:22:34.862 "num_base_bdevs_operational": 3, 00:22:34.862 "base_bdevs_list": [ 00:22:34.862 { 00:22:34.862 "name": null, 00:22:34.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.862 "is_configured": false, 00:22:34.862 "data_offset": 2048, 00:22:34.862 "data_size": 63488 00:22:34.862 }, 00:22:34.862 { 00:22:34.862 "name": "pt2", 00:22:34.862 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:34.862 "is_configured": true, 00:22:34.862 "data_offset": 2048, 00:22:34.862 "data_size": 63488 00:22:34.862 }, 00:22:34.862 { 00:22:34.862 "name": null, 00:22:34.862 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:34.862 "is_configured": false, 00:22:34.862 "data_offset": 2048, 00:22:34.862 "data_size": 63488 00:22:34.862 }, 00:22:34.862 { 00:22:34.862 "name": null, 00:22:34.862 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:34.862 "is_configured": false, 00:22:34.862 "data_offset": 2048, 00:22:34.862 "data_size": 63488 00:22:34.862 } 00:22:34.862 ] 00:22:34.862 }' 00:22:34.862 12:42:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.862 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:35.485 [2024-10-01 12:42:17.861736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:35.485 [2024-10-01 12:42:17.861797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.485 [2024-10-01 12:42:17.861848] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:35.485 [2024-10-01 12:42:17.861868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.485 [2024-10-01 12:42:17.862280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.485 [2024-10-01 12:42:17.862328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:35.485 [2024-10-01 12:42:17.862422] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:35.485 [2024-10-01 12:42:17.862440] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:35.485 pt3 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.485 12:42:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.756 12:42:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.756 "name": "raid_bdev1", 00:22:35.756 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:35.756 "strip_size_kb": 0, 00:22:35.756 "state": "configuring", 00:22:35.756 "raid_level": "raid1", 00:22:35.756 "superblock": true, 00:22:35.756 "num_base_bdevs": 4, 00:22:35.756 "num_base_bdevs_discovered": 2, 00:22:35.756 "num_base_bdevs_operational": 3, 00:22:35.756 "base_bdevs_list": [ 00:22:35.756 { 00:22:35.756 "name": null, 00:22:35.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.756 "is_configured": false, 00:22:35.756 "data_offset": 2048, 00:22:35.756 "data_size": 63488 00:22:35.756 }, 00:22:35.756 { 00:22:35.756 "name": "pt2", 00:22:35.756 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:35.756 "is_configured": true, 00:22:35.756 "data_offset": 2048, 00:22:35.756 "data_size": 63488 00:22:35.756 }, 00:22:35.756 { 00:22:35.756 "name": "pt3", 00:22:35.756 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:35.756 "is_configured": true, 00:22:35.756 "data_offset": 2048, 00:22:35.756 "data_size": 63488 00:22:35.756 }, 00:22:35.756 { 00:22:35.756 "name": null, 00:22:35.756 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:35.756 "is_configured": false, 00:22:35.756 "data_offset": 2048, 00:22:35.756 "data_size": 63488 00:22:35.756 } 00:22:35.756 ] 00:22:35.756 }' 00:22:35.756 12:42:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.756 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@462 -- # i=3 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:36.325 [2024-10-01 12:42:18.780407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:36.325 [2024-10-01 12:42:18.780473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.325 [2024-10-01 12:42:18.780525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:36.325 [2024-10-01 12:42:18.780545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.325 [2024-10-01 12:42:18.780934] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.325 [2024-10-01 12:42:18.780969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:36.325 [2024-10-01 12:42:18.781062] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:36.325 [2024-10-01 12:42:18.781080] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:36.325 [2024-10-01 12:42:18.781182] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:22:36.325 [2024-10-01 12:42:18.781190] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
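The verify_raid_bdev_state expansions traced above all reduce to one RPC plus a jq filter over its JSON output. A minimal standalone sketch of that pattern, assuming the same rpc.py path and socket as the trace (the rpc/sock shorthand and the [[ ]] assertions are editorial stand-ins for the helper's field checks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Fetch all raid bdevs and keep only the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "raid_bdev1")')
    # Assert the fields the test checks: state, level, discovered members.
    [[ $(jq -r '.state'      <<<"$info") == online ]]
    [[ $(jq -r '.raid_level' <<<"$info") == raid1  ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 3 ]]
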
00:22:36.325 [2024-10-01 12:42:18.781300] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:22:36.325 [2024-10-01 12:42:18.781604] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:22:36.325 [2024-10-01 12:42:18.781623] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:22:36.325 [2024-10-01 12:42:18.781759] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.325 pt4 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.325 12:42:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.585 12:42:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:36.585 "name": "raid_bdev1", 00:22:36.585 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:36.585 "strip_size_kb": 0, 00:22:36.585 "state": "online", 00:22:36.585 "raid_level": "raid1", 00:22:36.585 "superblock": true, 00:22:36.585 "num_base_bdevs": 4, 00:22:36.585 "num_base_bdevs_discovered": 3, 00:22:36.585 "num_base_bdevs_operational": 3, 00:22:36.585 "base_bdevs_list": [ 00:22:36.585 { 00:22:36.585 "name": null, 00:22:36.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.585 "is_configured": false, 00:22:36.585 "data_offset": 2048, 00:22:36.585 "data_size": 63488 00:22:36.585 }, 00:22:36.585 { 00:22:36.585 "name": "pt2", 00:22:36.585 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:36.585 "is_configured": true, 00:22:36.585 "data_offset": 2048, 00:22:36.585 "data_size": 63488 00:22:36.585 }, 00:22:36.585 { 00:22:36.585 "name": "pt3", 00:22:36.585 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:36.585 "is_configured": true, 00:22:36.585 "data_offset": 2048, 00:22:36.585 "data_size": 63488 00:22:36.585 }, 00:22:36.585 { 00:22:36.585 "name": "pt4", 00:22:36.585 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:36.585 "is_configured": true, 00:22:36.585 "data_offset": 2048, 00:22:36.585 "data_size": 63488 00:22:36.585 } 00:22:36.585 ] 00:22:36.585 }' 00:22:36.585 12:42:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:36.585 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:22:37.153 12:42:19 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:22:37.153 12:42:19 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:37.153 [2024-10-01 12:42:19.655172] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.153 [2024-10-01 12:42:19.655195] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:22:37.153 [2024-10-01 12:42:19.655244] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.153 [2024-10-01 12:42:19.655300] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.153 [2024-10-01 12:42:19.655308] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:22:37.153 12:42:19 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.153 12:42:19 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:37.412 12:42:19 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:37.412 12:42:19 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:37.412 12:42:19 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:37.671 [2024-10-01 12:42:20.030815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:37.671 [2024-10-01 12:42:20.032102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.671 [2024-10-01 12:42:20.032173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:37.671 [2024-10-01 12:42:20.032296] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.671 [2024-10-01 12:42:20.034497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.671 [2024-10-01 12:42:20.034661] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:37.671 [2024-10-01 12:42:20.034831] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:37.671 [2024-10-01 12:42:20.034898] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:37.671 pt1 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.671 12:42:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.930 12:42:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.930 "name": "raid_bdev1", 00:22:37.930 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:37.931 "strip_size_kb": 0, 00:22:37.931 "state": "configuring", 00:22:37.931 "raid_level": "raid1", 00:22:37.931 "superblock": true, 00:22:37.931 "num_base_bdevs": 4, 00:22:37.931 "num_base_bdevs_discovered": 1, 00:22:37.931 "num_base_bdevs_operational": 4, 00:22:37.931 "base_bdevs_list": [ 00:22:37.931 { 00:22:37.931 "name": "pt1", 00:22:37.931 "uuid": 
"e3c59abf-4171-5dad-9293-20a2ac922cd7", 00:22:37.931 "is_configured": true, 00:22:37.931 "data_offset": 2048, 00:22:37.931 "data_size": 63488 00:22:37.931 }, 00:22:37.931 { 00:22:37.931 "name": null, 00:22:37.931 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:37.931 "is_configured": false, 00:22:37.931 "data_offset": 2048, 00:22:37.931 "data_size": 63488 00:22:37.931 }, 00:22:37.931 { 00:22:37.931 "name": null, 00:22:37.931 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:37.931 "is_configured": false, 00:22:37.931 "data_offset": 2048, 00:22:37.931 "data_size": 63488 00:22:37.931 }, 00:22:37.931 { 00:22:37.931 "name": null, 00:22:37.931 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:37.931 "is_configured": false, 00:22:37.931 "data_offset": 2048, 00:22:37.931 "data_size": 63488 00:22:37.931 } 00:22:37.931 ] 00:22:37.931 }' 00:22:37.931 12:42:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.931 12:42:20 -- common/autotest_common.sh@10 -- # set +x 00:22:38.499 12:42:20 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:38.499 12:42:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:38.499 12:42:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:38.499 12:42:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:38.499 12:42:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:38.499 12:42:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:38.757 12:42:21 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:38.757 12:42:21 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:38.757 12:42:21 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:38.757 12:42:21 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:38.757 12:42:21 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:38.757 12:42:21 -- bdev/bdev_raid.sh@489 -- # i=3 00:22:38.757 12:42:21 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:39.016 [2024-10-01 12:42:21.448809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:39.016 [2024-10-01 12:42:21.449001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.016 [2024-10-01 12:42:21.449059] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:22:39.016 [2024-10-01 12:42:21.449185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.016 [2024-10-01 12:42:21.449607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.016 [2024-10-01 12:42:21.449754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:39.016 [2024-10-01 12:42:21.449926] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:39.016 [2024-10-01 12:42:21.449994] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:39.016 [2024-10-01 12:42:21.450081] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.016 [2024-10-01 12:42:21.450124] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 
00:22:39.016 [2024-10-01 12:42:21.450248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:39.016 pt4 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.016 12:42:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.275 12:42:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.275 "name": "raid_bdev1", 00:22:39.275 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:39.275 "strip_size_kb": 0, 00:22:39.275 "state": "configuring", 00:22:39.275 "raid_level": "raid1", 00:22:39.275 "superblock": true, 00:22:39.275 "num_base_bdevs": 4, 00:22:39.275 "num_base_bdevs_discovered": 1, 00:22:39.275 "num_base_bdevs_operational": 3, 00:22:39.275 "base_bdevs_list": [ 00:22:39.275 { 00:22:39.276 "name": null, 00:22:39.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.276 "is_configured": false, 00:22:39.276 "data_offset": 2048, 00:22:39.276 "data_size": 63488 00:22:39.276 }, 00:22:39.276 { 00:22:39.276 "name": null, 00:22:39.276 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:39.276 "is_configured": false, 00:22:39.276 "data_offset": 2048, 00:22:39.276 "data_size": 63488 00:22:39.276 }, 00:22:39.276 { 00:22:39.276 "name": null, 00:22:39.276 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:39.276 "is_configured": false, 00:22:39.276 "data_offset": 2048, 00:22:39.276 "data_size": 63488 00:22:39.276 }, 00:22:39.276 { 00:22:39.276 "name": "pt4", 00:22:39.276 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:39.276 "is_configured": true, 00:22:39.276 "data_offset": 2048, 00:22:39.276 "data_size": 63488 00:22:39.276 } 00:22:39.276 ] 00:22:39.276 }' 00:22:39.276 12:42:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.276 12:42:21 -- common/autotest_common.sh@10 -- # set +x 00:22:39.844 12:42:22 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:39.844 12:42:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:39.844 12:42:22 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:39.844 [2024-10-01 12:42:22.351476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:39.844 [2024-10-01 12:42:22.351690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.844 [2024-10-01 12:42:22.351751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:22:39.844 [2024-10-01 12:42:22.351894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.844 [2024-10-01 
12:42:22.352317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.844 [2024-10-01 12:42:22.352468] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:39.844 [2024-10-01 12:42:22.352578] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:39.844 [2024-10-01 12:42:22.352618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:39.844 pt2 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:40.103 [2024-10-01 12:42:22.543204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:40.103 [2024-10-01 12:42:22.543368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.103 [2024-10-01 12:42:22.543439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:22:40.103 [2024-10-01 12:42:22.543578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.103 [2024-10-01 12:42:22.543955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.103 [2024-10-01 12:42:22.544091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:40.103 [2024-10-01 12:42:22.544249] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:40.103 [2024-10-01 12:42:22.544339] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:40.103 [2024-10-01 12:42:22.544479] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:22:40.103 [2024-10-01 12:42:22.544669] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:40.103 [2024-10-01 12:42:22.544793] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:22:40.103 [2024-10-01 12:42:22.545152] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:22:40.103 [2024-10-01 12:42:22.545249] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:22:40.103 [2024-10-01 12:42:22.545436] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.103 pt3 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.103 12:42:22 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.103 12:42:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.362 12:42:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.362 "name": "raid_bdev1", 00:22:40.362 "uuid": "41f4d11f-d080-4fea-a869-ea7a00ed80fd", 00:22:40.362 "strip_size_kb": 0, 00:22:40.362 "state": "online", 00:22:40.362 "raid_level": "raid1", 00:22:40.362 "superblock": true, 00:22:40.362 "num_base_bdevs": 4, 00:22:40.362 "num_base_bdevs_discovered": 3, 00:22:40.362 "num_base_bdevs_operational": 3, 00:22:40.362 "base_bdevs_list": [ 00:22:40.362 { 00:22:40.362 "name": null, 00:22:40.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.362 "is_configured": false, 00:22:40.362 "data_offset": 2048, 00:22:40.362 "data_size": 63488 00:22:40.362 }, 00:22:40.362 { 00:22:40.362 "name": "pt2", 00:22:40.362 "uuid": "37119254-aa49-59cd-8d2e-46567b107845", 00:22:40.362 "is_configured": true, 00:22:40.362 "data_offset": 2048, 00:22:40.362 "data_size": 63488 00:22:40.362 }, 00:22:40.362 { 00:22:40.362 "name": "pt3", 00:22:40.362 "uuid": "0f3540a5-b840-5a45-a7a2-8ca9226aa97a", 00:22:40.362 "is_configured": true, 00:22:40.362 "data_offset": 2048, 00:22:40.362 "data_size": 63488 00:22:40.362 }, 00:22:40.362 { 00:22:40.362 "name": "pt4", 00:22:40.362 "uuid": "0bfbe006-5905-579a-9789-45d9d43a5060", 00:22:40.362 "is_configured": true, 00:22:40.362 "data_offset": 2048, 00:22:40.362 "data_size": 63488 00:22:40.362 } 00:22:40.362 ] 00:22:40.362 }' 00:22:40.362 12:42:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.362 12:42:22 -- common/autotest_common.sh@10 -- # set +x 00:22:40.931 12:42:23 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:40.931 12:42:23 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:40.931 [2024-10-01 12:42:23.382159] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:40.931 12:42:23 -- bdev/bdev_raid.sh@506 -- # '[' 41f4d11f-d080-4fea-a869-ea7a00ed80fd '!=' 41f4d11f-d080-4fea-a869-ea7a00ed80fd ']' 00:22:40.931 12:42:23 -- bdev/bdev_raid.sh@511 -- # killprocess 122092 00:22:40.931 12:42:23 -- common/autotest_common.sh@926 -- # '[' -z 122092 ']' 00:22:40.931 12:42:23 -- common/autotest_common.sh@930 -- # kill -0 122092 00:22:40.931 12:42:23 -- common/autotest_common.sh@931 -- # uname 00:22:40.931 12:42:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:40.931 12:42:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122092 00:22:40.931 12:42:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:40.931 12:42:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:40.931 12:42:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122092' 00:22:40.931 killing process with pid 122092 00:22:40.931 12:42:23 -- common/autotest_common.sh@945 -- # kill 122092 00:22:40.931 12:42:23 -- common/autotest_common.sh@950 -- # wait 122092 00:22:40.931 [2024-10-01 12:42:23.428533] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.931 [2024-10-01 12:42:23.428602] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.931 [2024-10-01 12:42:23.428662] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.931 [2024-10-01 
12:42:23.428706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:22:41.500 [2024-10-01 12:42:23.731744] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:42.439 ************************************ 00:22:42.439 END TEST raid_superblock_test 00:22:42.439 ************************************ 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:42.439 00:22:42.439 real 0m18.474s 00:22:42.439 user 0m32.893s 00:22:42.439 sys 0m2.921s 00:22:42.439 12:42:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.439 12:42:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:22:42.439 12:42:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:42.439 12:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:42.439 12:42:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.439 ************************************ 00:22:42.439 START TEST raid_rebuild_test 00:22:42.439 ************************************ 00:22:42.439 12:42:24 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@544 -- # raid_pid=122729 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:42.439 12:42:24 -- bdev/bdev_raid.sh@545 -- # waitforlisten 122729 /var/tmp/spdk-raid.sock 00:22:42.439 12:42:24 -- common/autotest_common.sh@819 -- # '[' -z 122729 ']' 00:22:42.439 12:42:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:42.439 12:42:24 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:42.439 12:42:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:42.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:42.439 12:42:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:42.439 12:42:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.439 [2024-10-01 12:42:24.959617] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:42.439 [2024-10-01 12:42:24.959909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122729 ] 00:22:42.439 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:42.439 Zero copy mechanism will not be used. 00:22:42.699 [2024-10-01 12:42:25.125090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.959 [2024-10-01 12:42:25.319526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.219 [2024-10-01 12:42:25.546653] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:43.478 12:42:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:43.478 12:42:25 -- common/autotest_common.sh@852 -- # return 0 00:22:43.478 12:42:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:43.478 12:42:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:43.478 12:42:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:43.478 BaseBdev1 00:22:43.478 12:42:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:43.478 12:42:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:43.478 12:42:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:43.737 BaseBdev2 00:22:43.737 12:42:26 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:43.996 spare_malloc 00:22:43.996 12:42:26 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:44.255 spare_delay 00:22:44.256 12:42:26 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:44.256 [2024-10-01 12:42:26.784194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:44.256 [2024-10-01 12:42:26.784758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.256 [2024-10-01 12:42:26.784992] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:44.256 [2024-10-01 12:42:26.785223] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.256 [2024-10-01 12:42:26.787615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.256 [2024-10-01 12:42:26.787852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:44.515 spare 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:44.515 [2024-10-01 12:42:26.972054] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.515 [2024-10-01 12:42:26.974048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.515 [2024-10-01 12:42:26.974250] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:22:44.515 [2024-10-01 12:42:26.974294] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:44.515 [2024-10-01 12:42:26.974489] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:44.515 [2024-10-01 12:42:26.974904] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:22:44.515 [2024-10-01 12:42:26.975004] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:22:44.515 [2024-10-01 12:42:26.975220] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.515 12:42:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.774 12:42:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.774 "name": "raid_bdev1", 00:22:44.774 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:44.774 "strip_size_kb": 0, 00:22:44.774 "state": "online", 00:22:44.774 "raid_level": "raid1", 00:22:44.774 "superblock": false, 00:22:44.774 "num_base_bdevs": 2, 00:22:44.774 "num_base_bdevs_discovered": 2, 00:22:44.774 "num_base_bdevs_operational": 2, 00:22:44.774 "base_bdevs_list": [ 00:22:44.774 { 00:22:44.774 "name": "BaseBdev1", 00:22:44.774 "uuid": "87bac104-b293-464b-b746-e0815b835e83", 00:22:44.774 "is_configured": true, 00:22:44.774 "data_offset": 0, 00:22:44.774 "data_size": 65536 00:22:44.774 }, 00:22:44.774 { 00:22:44.774 "name": "BaseBdev2", 00:22:44.774 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:44.774 "is_configured": true, 00:22:44.774 "data_offset": 0, 00:22:44.774 "data_size": 65536 00:22:44.774 } 00:22:44.774 ] 00:22:44.774 }' 00:22:44.774 12:42:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.774 12:42:27 -- common/autotest_common.sh@10 -- # set +x 00:22:45.342 12:42:27 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:45.342 12:42:27 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:45.342 [2024-10-01 12:42:27.871000] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.602 12:42:27 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:45.602 12:42:27 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:45.602 12:42:27 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.602 12:42:28 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:45.602 12:42:28 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:45.602 12:42:28 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:45.602 12:42:28 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:45.602 12:42:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:45.602 12:42:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:45.602 12:42:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:45.603 12:42:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:45.603 12:42:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:45.603 12:42:28 -- bdev/nbd_common.sh@12 -- # local i 00:22:45.603 12:42:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:45.603 12:42:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.603 12:42:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:45.862 [2024-10-01 12:42:28.246249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:45.862 /dev/nbd0 00:22:45.862 12:42:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:45.862 12:42:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:45.862 12:42:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:45.862 12:42:28 -- common/autotest_common.sh@857 -- # local i 00:22:45.862 12:42:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:45.862 12:42:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:45.862 12:42:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:45.862 12:42:28 -- common/autotest_common.sh@861 -- # break 00:22:45.862 12:42:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:45.862 12:42:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:45.862 12:42:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:45.862 1+0 records in 00:22:45.862 1+0 records out 00:22:45.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471031 s, 8.7 MB/s 00:22:45.862 12:42:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.862 12:42:28 -- common/autotest_common.sh@874 -- # size=4096 00:22:45.862 12:42:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.862 12:42:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:45.862 12:42:28 -- common/autotest_common.sh@877 -- # return 0 00:22:45.862 12:42:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:45.862 12:42:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.862 12:42:28 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:45.862 12:42:28 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:45.862 12:42:28 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:50.051 65536+0 records in 00:22:50.051 65536+0 records out 00:22:50.051 33554432 bytes (34 MB, 32 MiB) 
copied, 3.87104 s, 8.7 MB/s 00:22:50.051 12:42:32 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@51 -- # local i 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:50.051 [2024-10-01 12:42:32.418555] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@41 -- # break 00:22:50.051 12:42:32 -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.051 12:42:32 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:50.311 [2024-10-01 12:42:32.589946] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:50.311 "name": "raid_bdev1", 00:22:50.311 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:50.311 "strip_size_kb": 0, 00:22:50.311 "state": "online", 00:22:50.311 "raid_level": "raid1", 00:22:50.311 "superblock": false, 00:22:50.311 "num_base_bdevs": 2, 00:22:50.311 "num_base_bdevs_discovered": 1, 00:22:50.311 "num_base_bdevs_operational": 1, 00:22:50.311 "base_bdevs_list": [ 00:22:50.311 { 00:22:50.311 "name": null, 00:22:50.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.311 "is_configured": false, 00:22:50.311 "data_offset": 0, 00:22:50.311 "data_size": 65536 00:22:50.311 }, 00:22:50.311 { 00:22:50.311 "name": "BaseBdev2", 00:22:50.311 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:50.311 "is_configured": true, 00:22:50.311 "data_offset": 0, 00:22:50.311 "data_size": 65536 00:22:50.311 } 00:22:50.311 ] 00:22:50.311 }' 
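The remove_base_bdev step above is the heart of the rebuild test: one raid1 leg is pulled out from under the running array, which must stay online with a single discovered member. A condensed sketch of the underlying RPCs (names from the trace; the [[ ]] assertions paraphrase verify_raid_bdev_state raid_bdev1 online raid1 0 1):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Pull one mirror leg out of the array...
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
    # ...and confirm it keeps serving I/O degraded.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<<"$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 1 ]]
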
00:22:50.311 12:42:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:50.311 12:42:32 -- common/autotest_common.sh@10 -- # set +x 00:22:50.878 12:42:33 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:51.136 [2024-10-01 12:42:33.468613] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:51.136 [2024-10-01 12:42:33.468822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:51.136 [2024-10-01 12:42:33.487944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:22:51.136 [2024-10-01 12:42:33.490281] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:51.136 12:42:33 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:52.074 12:42:34 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.074 12:42:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.074 12:42:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.074 12:42:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.074 12:42:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.074 12:42:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.074 12:42:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.334 12:42:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.334 "name": "raid_bdev1", 00:22:52.334 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:52.334 "strip_size_kb": 0, 00:22:52.334 "state": "online", 00:22:52.334 "raid_level": "raid1", 00:22:52.334 "superblock": false, 00:22:52.334 "num_base_bdevs": 2, 00:22:52.334 "num_base_bdevs_discovered": 2, 00:22:52.334 "num_base_bdevs_operational": 2, 00:22:52.334 "process": { 00:22:52.334 "type": "rebuild", 00:22:52.334 "target": "spare", 00:22:52.334 "progress": { 00:22:52.334 "blocks": 22528, 00:22:52.334 "percent": 34 00:22:52.334 } 00:22:52.334 }, 00:22:52.334 "base_bdevs_list": [ 00:22:52.334 { 00:22:52.334 "name": "spare", 00:22:52.334 "uuid": "24e05ed9-08df-5dc4-bc0d-b5dd5d85ae8d", 00:22:52.334 "is_configured": true, 00:22:52.334 "data_offset": 0, 00:22:52.334 "data_size": 65536 00:22:52.334 }, 00:22:52.334 { 00:22:52.334 "name": "BaseBdev2", 00:22:52.334 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:52.334 "is_configured": true, 00:22:52.334 "data_offset": 0, 00:22:52.334 "data_size": 65536 00:22:52.334 } 00:22:52.334 ] 00:22:52.334 }' 00:22:52.334 12:42:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.334 12:42:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.334 12:42:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.334 12:42:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.334 12:42:34 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:52.593 [2024-10-01 12:42:34.937913] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.593 [2024-10-01 12:42:34.999097] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:52.593 [2024-10-01 12:42:34.999753] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.593 12:42:35 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.593 12:42:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.852 12:42:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.852 "name": "raid_bdev1", 00:22:52.852 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:52.852 "strip_size_kb": 0, 00:22:52.852 "state": "online", 00:22:52.852 "raid_level": "raid1", 00:22:52.852 "superblock": false, 00:22:52.852 "num_base_bdevs": 2, 00:22:52.852 "num_base_bdevs_discovered": 1, 00:22:52.852 "num_base_bdevs_operational": 1, 00:22:52.852 "base_bdevs_list": [ 00:22:52.852 { 00:22:52.852 "name": null, 00:22:52.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.852 "is_configured": false, 00:22:52.852 "data_offset": 0, 00:22:52.852 "data_size": 65536 00:22:52.852 }, 00:22:52.852 { 00:22:52.852 "name": "BaseBdev2", 00:22:52.852 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:52.852 "is_configured": true, 00:22:52.852 "data_offset": 0, 00:22:52.852 "data_size": 65536 00:22:52.852 } 00:22:52.852 ] 00:22:52.852 }' 00:22:52.853 12:42:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.853 12:42:35 -- common/autotest_common.sh@10 -- # set +x 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:53.421 "name": "raid_bdev1", 00:22:53.421 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:53.421 "strip_size_kb": 0, 00:22:53.421 "state": "online", 00:22:53.421 "raid_level": "raid1", 00:22:53.421 "superblock": false, 00:22:53.421 "num_base_bdevs": 2, 00:22:53.421 "num_base_bdevs_discovered": 1, 00:22:53.421 "num_base_bdevs_operational": 1, 00:22:53.421 "base_bdevs_list": [ 00:22:53.421 { 00:22:53.421 "name": null, 00:22:53.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.421 "is_configured": false, 00:22:53.421 "data_offset": 0, 00:22:53.421 "data_size": 65536 00:22:53.421 }, 00:22:53.421 { 00:22:53.421 "name": "BaseBdev2", 00:22:53.421 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:53.421 "is_configured": true, 
00:22:53.421 "data_offset": 0, 00:22:53.421 "data_size": 65536 00:22:53.421 } 00:22:53.421 ] 00:22:53.421 }' 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:53.421 12:42:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:53.681 12:42:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:53.681 12:42:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:53.681 12:42:35 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.681 [2024-10-01 12:42:36.155622] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:53.681 [2024-10-01 12:42:36.155797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.681 [2024-10-01 12:42:36.172869] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:22:53.681 [2024-10-01 12:42:36.175131] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.681 12:42:36 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:55.112 "name": "raid_bdev1", 00:22:55.112 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:55.112 "strip_size_kb": 0, 00:22:55.112 "state": "online", 00:22:55.112 "raid_level": "raid1", 00:22:55.112 "superblock": false, 00:22:55.112 "num_base_bdevs": 2, 00:22:55.112 "num_base_bdevs_discovered": 2, 00:22:55.112 "num_base_bdevs_operational": 2, 00:22:55.112 "process": { 00:22:55.112 "type": "rebuild", 00:22:55.112 "target": "spare", 00:22:55.112 "progress": { 00:22:55.112 "blocks": 22528, 00:22:55.112 "percent": 34 00:22:55.112 } 00:22:55.112 }, 00:22:55.112 "base_bdevs_list": [ 00:22:55.112 { 00:22:55.112 "name": "spare", 00:22:55.112 "uuid": "24e05ed9-08df-5dc4-bc0d-b5dd5d85ae8d", 00:22:55.112 "is_configured": true, 00:22:55.112 "data_offset": 0, 00:22:55.112 "data_size": 65536 00:22:55.112 }, 00:22:55.112 { 00:22:55.112 "name": "BaseBdev2", 00:22:55.112 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:55.112 "is_configured": true, 00:22:55.112 "data_offset": 0, 00:22:55.112 "data_size": 65536 00:22:55.112 } 00:22:55.112 ] 00:22:55.112 }' 00:22:55.112 12:42:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:55.113 12:42:37 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@657 -- # local timeout=344 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.113 12:42:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.371 12:42:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:55.371 "name": "raid_bdev1", 00:22:55.371 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:55.371 "strip_size_kb": 0, 00:22:55.371 "state": "online", 00:22:55.371 "raid_level": "raid1", 00:22:55.371 "superblock": false, 00:22:55.371 "num_base_bdevs": 2, 00:22:55.371 "num_base_bdevs_discovered": 2, 00:22:55.371 "num_base_bdevs_operational": 2, 00:22:55.371 "process": { 00:22:55.371 "type": "rebuild", 00:22:55.371 "target": "spare", 00:22:55.371 "progress": { 00:22:55.371 "blocks": 28672, 00:22:55.371 "percent": 43 00:22:55.371 } 00:22:55.371 }, 00:22:55.371 "base_bdevs_list": [ 00:22:55.371 { 00:22:55.371 "name": "spare", 00:22:55.371 "uuid": "24e05ed9-08df-5dc4-bc0d-b5dd5d85ae8d", 00:22:55.371 "is_configured": true, 00:22:55.371 "data_offset": 0, 00:22:55.371 "data_size": 65536 00:22:55.371 }, 00:22:55.371 { 00:22:55.371 "name": "BaseBdev2", 00:22:55.371 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:55.371 "is_configured": true, 00:22:55.371 "data_offset": 0, 00:22:55.371 "data_size": 65536 00:22:55.371 } 00:22:55.371 ] 00:22:55.371 }' 00:22:55.371 12:42:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:55.371 12:42:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.371 12:42:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:55.371 12:42:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.371 12:42:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.304 12:42:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.563 12:42:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:56.563 "name": "raid_bdev1", 00:22:56.563 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:56.563 "strip_size_kb": 0, 00:22:56.563 "state": "online", 00:22:56.563 "raid_level": "raid1", 00:22:56.563 "superblock": false, 00:22:56.563 "num_base_bdevs": 2, 00:22:56.563 "num_base_bdevs_discovered": 2, 00:22:56.563 "num_base_bdevs_operational": 2, 00:22:56.563 "process": { 
00:22:56.563 "type": "rebuild", 00:22:56.563 "target": "spare", 00:22:56.563 "progress": { 00:22:56.563 "blocks": 53248, 00:22:56.563 "percent": 81 00:22:56.563 } 00:22:56.563 }, 00:22:56.563 "base_bdevs_list": [ 00:22:56.563 { 00:22:56.563 "name": "spare", 00:22:56.563 "uuid": "24e05ed9-08df-5dc4-bc0d-b5dd5d85ae8d", 00:22:56.563 "is_configured": true, 00:22:56.563 "data_offset": 0, 00:22:56.563 "data_size": 65536 00:22:56.563 }, 00:22:56.563 { 00:22:56.563 "name": "BaseBdev2", 00:22:56.563 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:56.563 "is_configured": true, 00:22:56.563 "data_offset": 0, 00:22:56.563 "data_size": 65536 00:22:56.563 } 00:22:56.563 ] 00:22:56.563 }' 00:22:56.563 12:42:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:56.563 12:42:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.563 12:42:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:56.563 12:42:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.563 12:42:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:57.131 [2024-10-01 12:42:39.392239] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:57.131 [2024-10-01 12:42:39.392456] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:57.131 [2024-10-01 12:42:39.392621] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.700 12:42:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.700 12:42:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.700 "name": "raid_bdev1", 00:22:57.700 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:57.700 "strip_size_kb": 0, 00:22:57.700 "state": "online", 00:22:57.700 "raid_level": "raid1", 00:22:57.700 "superblock": false, 00:22:57.700 "num_base_bdevs": 2, 00:22:57.700 "num_base_bdevs_discovered": 2, 00:22:57.700 "num_base_bdevs_operational": 2, 00:22:57.700 "base_bdevs_list": [ 00:22:57.700 { 00:22:57.700 "name": "spare", 00:22:57.700 "uuid": "24e05ed9-08df-5dc4-bc0d-b5dd5d85ae8d", 00:22:57.700 "is_configured": true, 00:22:57.700 "data_offset": 0, 00:22:57.700 "data_size": 65536 00:22:57.700 }, 00:22:57.700 { 00:22:57.700 "name": "BaseBdev2", 00:22:57.700 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:57.700 "is_configured": true, 00:22:57.700 "data_offset": 0, 00:22:57.700 "data_size": 65536 00:22:57.700 } 00:22:57.700 ] 00:22:57.700 }' 00:22:57.700 12:42:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.700 12:42:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:57.700 12:42:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@660 -- # break 00:22:57.960 12:42:40 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.960 "name": "raid_bdev1", 00:22:57.960 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:57.960 "strip_size_kb": 0, 00:22:57.960 "state": "online", 00:22:57.960 "raid_level": "raid1", 00:22:57.960 "superblock": false, 00:22:57.960 "num_base_bdevs": 2, 00:22:57.960 "num_base_bdevs_discovered": 2, 00:22:57.960 "num_base_bdevs_operational": 2, 00:22:57.960 "base_bdevs_list": [ 00:22:57.960 { 00:22:57.960 "name": "spare", 00:22:57.960 "uuid": "24e05ed9-08df-5dc4-bc0d-b5dd5d85ae8d", 00:22:57.960 "is_configured": true, 00:22:57.960 "data_offset": 0, 00:22:57.960 "data_size": 65536 00:22:57.960 }, 00:22:57.960 { 00:22:57.960 "name": "BaseBdev2", 00:22:57.960 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:57.960 "is_configured": true, 00:22:57.960 "data_offset": 0, 00:22:57.960 "data_size": 65536 00:22:57.960 } 00:22:57.960 ] 00:22:57.960 }' 00:22:57.960 12:42:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:58.219 "name": "raid_bdev1", 00:22:58.219 "uuid": "6f4f3caa-a107-4448-8e1b-3108788c70c8", 00:22:58.219 "strip_size_kb": 0, 00:22:58.219 "state": "online", 00:22:58.219 "raid_level": "raid1", 00:22:58.219 "superblock": false, 00:22:58.219 "num_base_bdevs": 2, 00:22:58.219 "num_base_bdevs_discovered": 2, 00:22:58.219 "num_base_bdevs_operational": 2, 00:22:58.219 "base_bdevs_list": [ 00:22:58.219 { 00:22:58.219 "name": "spare", 00:22:58.219 "uuid": "24e05ed9-08df-5dc4-bc0d-b5dd5d85ae8d", 00:22:58.219 "is_configured": true, 00:22:58.219 "data_offset": 0, 
00:22:58.219 "data_size": 65536 00:22:58.219 }, 00:22:58.219 { 00:22:58.219 "name": "BaseBdev2", 00:22:58.219 "uuid": "cd710cef-650d-4a68-b79b-6e83da2d8794", 00:22:58.219 "is_configured": true, 00:22:58.219 "data_offset": 0, 00:22:58.219 "data_size": 65536 00:22:58.219 } 00:22:58.219 ] 00:22:58.219 }' 00:22:58.219 12:42:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:58.219 12:42:40 -- common/autotest_common.sh@10 -- # set +x 00:22:58.787 12:42:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:59.046 [2024-10-01 12:42:41.432259] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.046 [2024-10-01 12:42:41.432446] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.046 [2024-10-01 12:42:41.432628] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.046 [2024-10-01 12:42:41.432790] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.046 [2024-10-01 12:42:41.432875] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:22:59.046 12:42:41 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:59.046 12:42:41 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.306 12:42:41 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:59.306 12:42:41 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:59.306 12:42:41 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@12 -- # local i 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:59.306 /dev/nbd0 00:22:59.306 12:42:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:59.565 12:42:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:59.565 12:42:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:59.565 12:42:41 -- common/autotest_common.sh@857 -- # local i 00:22:59.565 12:42:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:59.565 12:42:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:59.565 12:42:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:59.565 12:42:41 -- common/autotest_common.sh@861 -- # break 00:22:59.565 12:42:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:59.565 12:42:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:59.565 12:42:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.565 1+0 records in 00:22:59.565 1+0 records out 00:22:59.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117417 s, 3.5 MB/s 00:22:59.565 12:42:41 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.565 12:42:41 -- common/autotest_common.sh@874 -- # size=4096 00:22:59.565 12:42:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.565 12:42:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:59.565 12:42:41 -- common/autotest_common.sh@877 -- # return 0 00:22:59.565 12:42:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:59.565 12:42:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.565 12:42:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:59.565 /dev/nbd1 00:22:59.565 12:42:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:59.565 12:42:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:59.565 12:42:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:59.565 12:42:42 -- common/autotest_common.sh@857 -- # local i 00:22:59.565 12:42:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:59.565 12:42:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:59.565 12:42:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:59.565 12:42:42 -- common/autotest_common.sh@861 -- # break 00:22:59.565 12:42:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:59.565 12:42:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:59.565 12:42:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.565 1+0 records in 00:22:59.565 1+0 records out 00:22:59.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477386 s, 8.6 MB/s 00:22:59.565 12:42:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.824 12:42:42 -- common/autotest_common.sh@874 -- # size=4096 00:22:59.824 12:42:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.824 12:42:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:59.825 12:42:42 -- common/autotest_common.sh@877 -- # return 0 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.825 12:42:42 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:59.825 12:42:42 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@51 -- # local i 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:59.825 12:42:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@41 -- # break 
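The dd/stat sequence above is the harness's NBD readiness probe: before a disk is trusted, its name must appear in /proc/partitions and a single 4 KiB direct read must complete. A minimal sketch of that pattern, assuming the retry bound of 20 visible in the trace (the real helper lives in autotest_common.sh and additionally stats the copied block; the sleep interval here is an assumption):

    # poll for the device, then prove it answers direct I/O
    waitfornbd_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # interval assumed, not shown in the trace
      done
      # one 4 KiB O_DIRECT read; a dead NBD connection fails here
      dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }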
00:23:00.084 12:42:42 -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.084 12:42:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@41 -- # break 00:23:00.343 12:42:42 -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.343 12:42:42 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:00.343 12:42:42 -- bdev/bdev_raid.sh@709 -- # killprocess 122729 00:23:00.343 12:42:42 -- common/autotest_common.sh@926 -- # '[' -z 122729 ']' 00:23:00.343 12:42:42 -- common/autotest_common.sh@930 -- # kill -0 122729 00:23:00.343 12:42:42 -- common/autotest_common.sh@931 -- # uname 00:23:00.343 12:42:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:00.343 12:42:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122729 00:23:00.343 12:42:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:00.343 12:42:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:00.343 12:42:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122729' 00:23:00.343 killing process with pid 122729 00:23:00.343 12:42:42 -- common/autotest_common.sh@945 -- # kill 122729 00:23:00.343 Received shutdown signal, test time was about 60.000000 seconds 00:23:00.343 00:23:00.343 Latency(us) 00:23:00.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.343 =================================================================================================================== 00:23:00.343 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.343 12:42:42 -- common/autotest_common.sh@950 -- # wait 122729 00:23:00.343 [2024-10-01 12:42:42.770789] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:00.602 [2024-10-01 12:42:43.081480] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:01.982 12:42:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:01.982 00:23:01.982 real 0m19.627s 00:23:01.982 user 0m25.138s 00:23:01.982 sys 0m3.785s 00:23:01.982 12:42:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.982 ************************************ 00:23:01.982 END TEST raid_rebuild_test 00:23:01.982 ************************************ 00:23:01.982 12:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:23:02.241 12:42:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:02.241 12:42:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:02.241 12:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:02.241 ************************************ 00:23:02.241 START TEST raid_rebuild_test_sb 00:23:02.241 ************************************ 00:23:02.241 12:42:44 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:02.241 
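The END TEST / START TEST banners mark the boundary between the plain rebuild case and its superblock variant: raid_rebuild_test_sb simply re-runs raid_rebuild_test with the superblock flag set. The positional arguments map onto the locals traced below; a sketch of that unpacking (names taken verbatim from the xtrace, body abbreviated):

    raid_rebuild_test() {
      local raid_level=$1       # raid1
      local num_base_bdevs=$2   # 2
      local superblock=$3       # true  -> later appends ' -s' to create_arg
      local background_io=$4    # false -> no background traffic
      # ... remainder builds the base bdevs and drives the rebuild (see trace)
    }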
12:42:44 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@544 -- # raid_pid=123248 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:02.241 12:42:44 -- bdev/bdev_raid.sh@545 -- # waitforlisten 123248 /var/tmp/spdk-raid.sock 00:23:02.241 12:42:44 -- common/autotest_common.sh@819 -- # '[' -z 123248 ']' 00:23:02.241 12:42:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:02.241 12:42:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:02.241 12:42:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:02.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:02.241 12:42:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:02.241 12:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:02.241 [2024-10-01 12:42:44.684223] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:02.241 [2024-10-01 12:42:44.685091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123248 ] 00:23:02.241 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:02.241 Zero copy mechanism will not be used. 
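The zero-copy notice is pure arithmetic: bdevperf was started with -o 3M, and 3 MiB exceeds the 64 KiB zero-copy threshold it reports, so buffered I/O is used for the whole run. A quick check with the values printed in the log:

    io_size=$((3 * 1024 * 1024))   # -o 3M -> 3145728 bytes
    zcopy_threshold=65536          # threshold printed by bdevperf above
    (( io_size > zcopy_threshold )) && echo "zero copy disabled for ${io_size}-byte I/O"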
00:23:02.501 [2024-10-01 12:42:44.855175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.760 [2024-10-01 12:42:45.073001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.019 [2024-10-01 12:42:45.333280] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.956 12:42:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:03.956 12:42:46 -- common/autotest_common.sh@852 -- # return 0 00:23:03.956 12:42:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:03.956 12:42:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:03.956 12:42:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:03.956 BaseBdev1_malloc 00:23:03.957 12:42:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:04.215 [2024-10-01 12:42:46.536083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:04.215 [2024-10-01 12:42:46.536404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.215 [2024-10-01 12:42:46.536483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:04.215 [2024-10-01 12:42:46.536611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.215 [2024-10-01 12:42:46.539230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.215 [2024-10-01 12:42:46.539421] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:04.215 BaseBdev1 00:23:04.215 12:42:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:04.215 12:42:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:04.215 12:42:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:04.474 BaseBdev2_malloc 00:23:04.474 12:42:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:04.475 [2024-10-01 12:42:46.980684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:04.475 [2024-10-01 12:42:46.980948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.475 [2024-10-01 12:42:46.981080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:04.475 [2024-10-01 12:42:46.981238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.475 [2024-10-01 12:42:46.983770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.475 [2024-10-01 12:42:46.983979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:04.475 BaseBdev2 00:23:04.475 12:42:46 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:04.734 spare_malloc 00:23:04.734 12:42:47 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:04.992 spare_delay 00:23:04.992 12:42:47 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:05.251 [2024-10-01 12:42:47.555360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:05.251 [2024-10-01 12:42:47.555591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.251 [2024-10-01 12:42:47.555671] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:23:05.251 [2024-10-01 12:42:47.555808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.251 [2024-10-01 12:42:47.558337] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.251 [2024-10-01 12:42:47.558509] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:05.251 spare 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:05.251 [2024-10-01 12:42:47.739157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:05.251 [2024-10-01 12:42:47.741432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.251 [2024-10-01 12:42:47.741757] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:23:05.251 [2024-10-01 12:42:47.741809] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:05.251 [2024-10-01 12:42:47.742009] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:05.251 [2024-10-01 12:42:47.742376] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:23:05.251 [2024-10-01 12:42:47.742579] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:23:05.251 [2024-10-01 12:42:47.742819] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.251 12:42:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.509 12:42:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.509 "name": "raid_bdev1", 00:23:05.509 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:05.509 "strip_size_kb": 0, 00:23:05.509 "state": "online", 00:23:05.509 "raid_level": "raid1", 00:23:05.509 "superblock": true, 00:23:05.510 "num_base_bdevs": 2, 00:23:05.510 "num_base_bdevs_discovered": 2, 00:23:05.510 "num_base_bdevs_operational": 2, 00:23:05.510 
"base_bdevs_list": [ 00:23:05.510 { 00:23:05.510 "name": "BaseBdev1", 00:23:05.510 "uuid": "d4542dc8-4bf8-517a-a0e3-b8ec357d65ea", 00:23:05.510 "is_configured": true, 00:23:05.510 "data_offset": 2048, 00:23:05.510 "data_size": 63488 00:23:05.510 }, 00:23:05.510 { 00:23:05.510 "name": "BaseBdev2", 00:23:05.510 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:05.510 "is_configured": true, 00:23:05.510 "data_offset": 2048, 00:23:05.510 "data_size": 63488 00:23:05.510 } 00:23:05.510 ] 00:23:05.510 }' 00:23:05.510 12:42:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.510 12:42:47 -- common/autotest_common.sh@10 -- # set +x 00:23:06.077 12:42:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:06.077 12:42:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:06.077 [2024-10-01 12:42:48.582022] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:06.077 12:42:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:06.078 12:42:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.078 12:42:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:06.336 12:42:48 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:06.336 12:42:48 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:06.336 12:42:48 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:06.336 12:42:48 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@12 -- # local i 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:06.336 12:42:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:06.600 [2024-10-01 12:42:48.953350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:06.600 /dev/nbd0 00:23:06.600 12:42:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:06.600 12:42:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:06.600 12:42:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:06.600 12:42:48 -- common/autotest_common.sh@857 -- # local i 00:23:06.600 12:42:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:06.600 12:42:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:06.600 12:42:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:06.600 12:42:49 -- common/autotest_common.sh@861 -- # break 00:23:06.600 12:42:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:06.600 12:42:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:06.600 12:42:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.600 1+0 records in 00:23:06.600 1+0 records out 00:23:06.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613875 s, 6.7 MB/s 00:23:06.600 12:42:49 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.600 12:42:49 -- common/autotest_common.sh@874 -- # size=4096 00:23:06.600 12:42:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.600 12:42:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:06.600 12:42:49 -- common/autotest_common.sh@877 -- # return 0 00:23:06.600 12:42:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.600 12:42:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:06.600 12:42:49 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:23:06.600 12:42:49 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:23:06.600 12:42:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:10.787 63488+0 records in 00:23:10.787 63488+0 records out 00:23:10.787 32505856 bytes (33 MB, 31 MiB) copied, 4.12876 s, 7.9 MB/s 00:23:10.787 12:42:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:10.787 12:42:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:10.787 12:42:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:10.787 12:42:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:10.787 12:42:53 -- bdev/nbd_common.sh@51 -- # local i 00:23:10.787 12:42:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.787 12:42:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:11.046 [2024-10-01 12:42:53.355128] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@41 -- # break 00:23:11.046 12:42:53 -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:11.046 [2024-10-01 12:42:53.534381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.046 12:42:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.305 12:42:53 
-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.305 "name": "raid_bdev1", 00:23:11.305 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:11.305 "strip_size_kb": 0, 00:23:11.305 "state": "online", 00:23:11.305 "raid_level": "raid1", 00:23:11.305 "superblock": true, 00:23:11.305 "num_base_bdevs": 2, 00:23:11.305 "num_base_bdevs_discovered": 1, 00:23:11.305 "num_base_bdevs_operational": 1, 00:23:11.305 "base_bdevs_list": [ 00:23:11.305 { 00:23:11.305 "name": null, 00:23:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.305 "is_configured": false, 00:23:11.305 "data_offset": 2048, 00:23:11.305 "data_size": 63488 00:23:11.305 }, 00:23:11.305 { 00:23:11.305 "name": "BaseBdev2", 00:23:11.305 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:11.305 "is_configured": true, 00:23:11.305 "data_offset": 2048, 00:23:11.305 "data_size": 63488 00:23:11.305 } 00:23:11.305 ] 00:23:11.305 }' 00:23:11.305 12:42:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.305 12:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.872 12:42:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:12.131 [2024-10-01 12:42:54.417038] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:12.131 [2024-10-01 12:42:54.417232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:12.131 [2024-10-01 12:42:54.433785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:23:12.131 [2024-10-01 12:42:54.436091] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:12.131 12:42:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:13.068 12:42:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.068 12:42:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:13.068 12:42:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:13.068 12:42:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:13.068 12:42:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:13.068 12:42:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.068 12:42:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.325 12:42:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:13.325 "name": "raid_bdev1", 00:23:13.325 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:13.325 "strip_size_kb": 0, 00:23:13.325 "state": "online", 00:23:13.325 "raid_level": "raid1", 00:23:13.325 "superblock": true, 00:23:13.325 "num_base_bdevs": 2, 00:23:13.325 "num_base_bdevs_discovered": 2, 00:23:13.325 "num_base_bdevs_operational": 2, 00:23:13.325 "process": { 00:23:13.325 "type": "rebuild", 00:23:13.325 "target": "spare", 00:23:13.325 "progress": { 00:23:13.325 "blocks": 22528, 00:23:13.325 "percent": 35 00:23:13.325 } 00:23:13.325 }, 00:23:13.325 "base_bdevs_list": [ 00:23:13.325 { 00:23:13.325 "name": "spare", 00:23:13.325 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:13.325 "is_configured": true, 00:23:13.325 "data_offset": 2048, 00:23:13.325 "data_size": 63488 00:23:13.325 }, 00:23:13.325 { 00:23:13.325 "name": "BaseBdev2", 00:23:13.325 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:13.325 "is_configured": true, 00:23:13.325 "data_offset": 2048, 00:23:13.325 "data_size": 63488 00:23:13.325 } 
00:23:13.325 ] 00:23:13.325 }' 00:23:13.325 12:42:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:13.325 12:42:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.325 12:42:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:13.325 12:42:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.325 12:42:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:13.583 [2024-10-01 12:42:55.895491] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:13.583 [2024-10-01 12:42:55.944114] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:13.583 [2024-10-01 12:42:55.944319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.583 12:42:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.583 12:42:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.583 12:42:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.840 12:42:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.840 "name": "raid_bdev1", 00:23:13.840 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:13.840 "strip_size_kb": 0, 00:23:13.840 "state": "online", 00:23:13.840 "raid_level": "raid1", 00:23:13.840 "superblock": true, 00:23:13.840 "num_base_bdevs": 2, 00:23:13.840 "num_base_bdevs_discovered": 1, 00:23:13.840 "num_base_bdevs_operational": 1, 00:23:13.840 "base_bdevs_list": [ 00:23:13.840 { 00:23:13.840 "name": null, 00:23:13.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.840 "is_configured": false, 00:23:13.840 "data_offset": 2048, 00:23:13.840 "data_size": 63488 00:23:13.840 }, 00:23:13.840 { 00:23:13.840 "name": "BaseBdev2", 00:23:13.840 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:13.840 "is_configured": true, 00:23:13.840 "data_offset": 2048, 00:23:13.840 "data_size": 63488 00:23:13.840 } 00:23:13.841 ] 00:23:13.841 }' 00:23:13.841 12:42:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.841 12:42:56 -- common/autotest_common.sh@10 -- # set +x 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
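verify_raid_bdev_process leans on jq's alternative operator: '.process.type // "none"' collapses a missing process object to the string none, so one helper can assert both an active rebuild and its completion. The pattern, reassembled from the trace (socket path, rpc.py path, and bdev name exactly as above):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
             bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<< "$info")      # "rebuild" while running
    ptarget=$(jq -r '.process.target // "none"' <<< "$info")  # "spare" while running
    [[ $ptype == none && $ptarget == none ]] && echo "no rebuild in progress"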
00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:14.444 "name": "raid_bdev1", 00:23:14.444 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:14.444 "strip_size_kb": 0, 00:23:14.444 "state": "online", 00:23:14.444 "raid_level": "raid1", 00:23:14.444 "superblock": true, 00:23:14.444 "num_base_bdevs": 2, 00:23:14.444 "num_base_bdevs_discovered": 1, 00:23:14.444 "num_base_bdevs_operational": 1, 00:23:14.444 "base_bdevs_list": [ 00:23:14.444 { 00:23:14.444 "name": null, 00:23:14.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.444 "is_configured": false, 00:23:14.444 "data_offset": 2048, 00:23:14.444 "data_size": 63488 00:23:14.444 }, 00:23:14.444 { 00:23:14.444 "name": "BaseBdev2", 00:23:14.444 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:14.444 "is_configured": true, 00:23:14.444 "data_offset": 2048, 00:23:14.444 "data_size": 63488 00:23:14.444 } 00:23:14.444 ] 00:23:14.444 }' 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:14.444 12:42:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:14.702 12:42:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:14.702 12:42:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:14.702 [2024-10-01 12:42:57.140439] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:14.702 [2024-10-01 12:42:57.140612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:14.702 [2024-10-01 12:42:57.157730] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:23:14.702 [2024-10-01 12:42:57.159917] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:14.702 12:42:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:16.080 "name": "raid_bdev1", 00:23:16.080 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:16.080 "strip_size_kb": 0, 00:23:16.080 "state": "online", 00:23:16.080 "raid_level": "raid1", 00:23:16.080 "superblock": true, 00:23:16.080 "num_base_bdevs": 2, 00:23:16.080 "num_base_bdevs_discovered": 2, 00:23:16.080 "num_base_bdevs_operational": 2, 00:23:16.080 "process": { 00:23:16.080 "type": "rebuild", 00:23:16.080 "target": "spare", 00:23:16.080 "progress": { 00:23:16.080 "blocks": 22528, 00:23:16.080 "percent": 35 00:23:16.080 } 00:23:16.080 }, 00:23:16.080 "base_bdevs_list": [ 00:23:16.080 { 00:23:16.080 "name": "spare", 00:23:16.080 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:16.080 "is_configured": true, 
00:23:16.080 "data_offset": 2048, 00:23:16.080 "data_size": 63488 00:23:16.080 }, 00:23:16.080 { 00:23:16.080 "name": "BaseBdev2", 00:23:16.080 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:16.080 "is_configured": true, 00:23:16.080 "data_offset": 2048, 00:23:16.080 "data_size": 63488 00:23:16.080 } 00:23:16.080 ] 00:23:16.080 }' 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:16.080 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@657 -- # local timeout=365 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.080 12:42:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.339 12:42:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:16.339 "name": "raid_bdev1", 00:23:16.339 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:16.339 "strip_size_kb": 0, 00:23:16.339 "state": "online", 00:23:16.339 "raid_level": "raid1", 00:23:16.339 "superblock": true, 00:23:16.339 "num_base_bdevs": 2, 00:23:16.339 "num_base_bdevs_discovered": 2, 00:23:16.339 "num_base_bdevs_operational": 2, 00:23:16.339 "process": { 00:23:16.339 "type": "rebuild", 00:23:16.339 "target": "spare", 00:23:16.339 "progress": { 00:23:16.339 "blocks": 28672, 00:23:16.339 "percent": 45 00:23:16.339 } 00:23:16.339 }, 00:23:16.339 "base_bdevs_list": [ 00:23:16.339 { 00:23:16.339 "name": "spare", 00:23:16.339 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:16.339 "is_configured": true, 00:23:16.339 "data_offset": 2048, 00:23:16.339 "data_size": 63488 00:23:16.339 }, 00:23:16.339 { 00:23:16.339 "name": "BaseBdev2", 00:23:16.339 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:16.339 "is_configured": true, 00:23:16.339 "data_offset": 2048, 00:23:16.339 "data_size": 63488 00:23:16.339 } 00:23:16.339 ] 00:23:16.339 }' 00:23:16.339 12:42:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:16.339 12:42:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.339 12:42:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:16.339 12:42:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.339 12:42:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < 
timeout )) 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.278 12:42:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.538 12:42:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:17.538 "name": "raid_bdev1", 00:23:17.538 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:17.538 "strip_size_kb": 0, 00:23:17.538 "state": "online", 00:23:17.538 "raid_level": "raid1", 00:23:17.538 "superblock": true, 00:23:17.538 "num_base_bdevs": 2, 00:23:17.538 "num_base_bdevs_discovered": 2, 00:23:17.538 "num_base_bdevs_operational": 2, 00:23:17.538 "process": { 00:23:17.538 "type": "rebuild", 00:23:17.538 "target": "spare", 00:23:17.538 "progress": { 00:23:17.538 "blocks": 55296, 00:23:17.538 "percent": 87 00:23:17.538 } 00:23:17.538 }, 00:23:17.538 "base_bdevs_list": [ 00:23:17.538 { 00:23:17.538 "name": "spare", 00:23:17.538 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:17.538 "is_configured": true, 00:23:17.538 "data_offset": 2048, 00:23:17.538 "data_size": 63488 00:23:17.538 }, 00:23:17.538 { 00:23:17.538 "name": "BaseBdev2", 00:23:17.538 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:17.538 "is_configured": true, 00:23:17.538 "data_offset": 2048, 00:23:17.538 "data_size": 63488 00:23:17.538 } 00:23:17.538 ] 00:23:17.538 }' 00:23:17.538 12:42:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:17.538 12:42:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.538 12:42:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:17.538 12:42:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.538 12:42:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:17.798 [2024-10-01 12:43:00.275816] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:17.798 [2024-10-01 12:43:00.276115] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:17.798 [2024-10-01 12:43:00.276360] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.736 "name": "raid_bdev1", 00:23:18.736 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:18.736 "strip_size_kb": 0, 00:23:18.736 "state": "online", 00:23:18.736 
"raid_level": "raid1", 00:23:18.736 "superblock": true, 00:23:18.736 "num_base_bdevs": 2, 00:23:18.736 "num_base_bdevs_discovered": 2, 00:23:18.736 "num_base_bdevs_operational": 2, 00:23:18.736 "base_bdevs_list": [ 00:23:18.736 { 00:23:18.736 "name": "spare", 00:23:18.736 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:18.736 "is_configured": true, 00:23:18.736 "data_offset": 2048, 00:23:18.736 "data_size": 63488 00:23:18.736 }, 00:23:18.736 { 00:23:18.736 "name": "BaseBdev2", 00:23:18.736 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:18.736 "is_configured": true, 00:23:18.736 "data_offset": 2048, 00:23:18.736 "data_size": 63488 00:23:18.736 } 00:23:18.736 ] 00:23:18.736 }' 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@660 -- # break 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.736 12:43:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.995 12:43:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.995 "name": "raid_bdev1", 00:23:18.995 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:18.995 "strip_size_kb": 0, 00:23:18.995 "state": "online", 00:23:18.995 "raid_level": "raid1", 00:23:18.995 "superblock": true, 00:23:18.995 "num_base_bdevs": 2, 00:23:18.995 "num_base_bdevs_discovered": 2, 00:23:18.995 "num_base_bdevs_operational": 2, 00:23:18.995 "base_bdevs_list": [ 00:23:18.995 { 00:23:18.995 "name": "spare", 00:23:18.995 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:18.995 "is_configured": true, 00:23:18.995 "data_offset": 2048, 00:23:18.995 "data_size": 63488 00:23:18.995 }, 00:23:18.995 { 00:23:18.995 "name": "BaseBdev2", 00:23:18.995 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:18.995 "is_configured": true, 00:23:18.995 "data_offset": 2048, 00:23:18.995 "data_size": 63488 00:23:18.995 } 00:23:18.995 ] 00:23:18.995 }' 00:23:18.995 12:43:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.995 12:43:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:18.995 12:43:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.255 "name": "raid_bdev1", 00:23:19.255 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:19.255 "strip_size_kb": 0, 00:23:19.255 "state": "online", 00:23:19.255 "raid_level": "raid1", 00:23:19.255 "superblock": true, 00:23:19.255 "num_base_bdevs": 2, 00:23:19.255 "num_base_bdevs_discovered": 2, 00:23:19.255 "num_base_bdevs_operational": 2, 00:23:19.255 "base_bdevs_list": [ 00:23:19.255 { 00:23:19.255 "name": "spare", 00:23:19.255 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:19.255 "is_configured": true, 00:23:19.255 "data_offset": 2048, 00:23:19.255 "data_size": 63488 00:23:19.255 }, 00:23:19.255 { 00:23:19.255 "name": "BaseBdev2", 00:23:19.255 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:19.255 "is_configured": true, 00:23:19.255 "data_offset": 2048, 00:23:19.255 "data_size": 63488 00:23:19.255 } 00:23:19.255 ] 00:23:19.255 }' 00:23:19.255 12:43:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.255 12:43:01 -- common/autotest_common.sh@10 -- # set +x 00:23:19.823 12:43:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:20.081 [2024-10-01 12:43:02.383972] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:20.081 [2024-10-01 12:43:02.384122] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:20.081 [2024-10-01 12:43:02.384366] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.081 [2024-10-01 12:43:02.384472] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.081 [2024-10-01 12:43:02.384672] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:23:20.081 12:43:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:20.081 12:43:02 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.081 12:43:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:20.081 12:43:02 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:20.081 12:43:02 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@12 -- # local i 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:20.081 12:43:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:20.340 /dev/nbd0 00:23:20.340 12:43:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:20.340 12:43:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:20.340 12:43:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:20.340 12:43:02 -- common/autotest_common.sh@857 -- # local i 00:23:20.340 12:43:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:20.340 12:43:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:20.340 12:43:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:20.340 12:43:02 -- common/autotest_common.sh@861 -- # break 00:23:20.340 12:43:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:20.340 12:43:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:20.340 12:43:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:20.340 1+0 records in 00:23:20.340 1+0 records out 00:23:20.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624453 s, 6.6 MB/s 00:23:20.340 12:43:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.340 12:43:02 -- common/autotest_common.sh@874 -- # size=4096 00:23:20.340 12:43:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.340 12:43:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:20.340 12:43:02 -- common/autotest_common.sh@877 -- # return 0 00:23:20.340 12:43:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:20.340 12:43:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:20.340 12:43:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:20.599 /dev/nbd1 00:23:20.599 12:43:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:20.599 12:43:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:20.599 12:43:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:20.599 12:43:03 -- common/autotest_common.sh@857 -- # local i 00:23:20.599 12:43:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:20.599 12:43:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:20.599 12:43:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:20.599 12:43:03 -- common/autotest_common.sh@861 -- # break 00:23:20.599 12:43:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:20.599 12:43:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:20.599 12:43:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:20.599 1+0 records in 00:23:20.599 1+0 records out 00:23:20.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064819 s, 6.3 MB/s 00:23:20.599 12:43:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.599 12:43:03 -- common/autotest_common.sh@874 -- # size=4096 00:23:20.599 12:43:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.599 12:43:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:20.599 12:43:03 -- common/autotest_common.sh@877 -- # return 0 00:23:20.599 12:43:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:20.599 12:43:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:20.599 12:43:03 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
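The offset handed to cmp below is the superblock skip: this run reports data_offset 2048 blocks at a 512-byte blocklen, so user data starts 1 MiB into each exported device, and cmp -i skips that prefix on both files (the non-superblock run earlier compared from offset 0). The arithmetic:

    data_offset_blocks=2048   # from the raid_bdev JSON above
    blocklen=512              # "blockcnt 63488, blocklen 512" in the trace
    skip=$((data_offset_blocks * blocklen))   # 1048576
    cmp -i "$skip" /dev/nbd0 /dev/nbd1        # exit 0 -> rebuilt data matches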
00:23:20.858 12:43:03 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:20.858 12:43:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:20.858 12:43:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:20.858 12:43:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:20.858 12:43:03 -- bdev/nbd_common.sh@51 -- # local i 00:23:20.858 12:43:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:20.858 12:43:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:21.116 12:43:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:21.116 12:43:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:21.116 12:43:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:21.116 12:43:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:21.117 12:43:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:21.117 12:43:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:21.117 12:43:03 -- bdev/nbd_common.sh@41 -- # break 00:23:21.117 12:43:03 -- bdev/nbd_common.sh@45 -- # return 0 00:23:21.117 12:43:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:21.117 12:43:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@41 -- # break 00:23:21.376 12:43:03 -- bdev/nbd_common.sh@45 -- # return 0 00:23:21.376 12:43:03 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:21.376 12:43:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:21.376 12:43:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:21.376 12:43:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:21.376 12:43:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:21.634 [2024-10-01 12:43:04.047268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:21.634 [2024-10-01 12:43:04.047359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.634 [2024-10-01 12:43:04.047413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:21.634 [2024-10-01 12:43:04.047443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.634 [2024-10-01 12:43:04.049919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.634 [2024-10-01 12:43:04.050001] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:21.634 [2024-10-01 12:43:04.050149] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:21.634 [2024-10-01 12:43:04.050265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.634 BaseBdev1 
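Note how BaseBdev1 is brought back without any explicit raid RPC: the test deletes and recreates the passthru vbdev on top of BaseBdev1_malloc, and when the new bdev registers, the raid module's examine path finds the on-disk superblock and re-claims it into raid_bdev1, which is what the *NOTICE*/*DEBUG* lines above report. A sketch of just that RPC pair, using the names from this log:

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# Drop the wrapper; raid_bdev1 sees its base bdev disappear.
$RPC bdev_passthru_delete BaseBdev1
# Recreate it; registration triggers examine, the raid superblock is found,
# and the bdev is claimed back into raid_bdev1 automatically.
$RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

The seq_number comparison printed for BaseBdev2 just below follows the same examine-driven path, with the superblock generation deciding how the returning member is treated.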
00:23:21.634 12:43:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:21.634 12:43:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:23:21.634 12:43:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:23:21.894 12:43:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:21.894 [2024-10-01 12:43:04.402755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:21.894 [2024-10-01 12:43:04.402830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.894 [2024-10-01 12:43:04.402866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:21.894 [2024-10-01 12:43:04.402896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.894 [2024-10-01 12:43:04.403292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.894 [2024-10-01 12:43:04.403351] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:21.894 [2024-10-01 12:43:04.403458] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:23:21.894 [2024-10-01 12:43:04.403469] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:23:21.894 [2024-10-01 12:43:04.403477] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.894 [2024-10-01 12:43:04.403494] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:23:21.894 [2024-10-01 12:43:04.403553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:21.894 BaseBdev2 00:23:22.153 12:43:04 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:22.153 12:43:04 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:22.412 [2024-10-01 12:43:04.770295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:22.412 [2024-10-01 12:43:04.770349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.412 [2024-10-01 12:43:04.770381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:22.412 [2024-10-01 12:43:04.770401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.412 [2024-10-01 12:43:04.770790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.412 [2024-10-01 12:43:04.770837] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:22.412 [2024-10-01 12:43:04.770928] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:22.412 [2024-10-01 12:43:04.770964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:22.412 spare 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.412 12:43:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.412 [2024-10-01 12:43:04.870888] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:23:22.413 [2024-10-01 12:43:04.870907] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:22.413 [2024-10-01 12:43:04.871015] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:23:22.413 [2024-10-01 12:43:04.871358] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:23:22.413 [2024-10-01 12:43:04.871378] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:23:22.413 [2024-10-01 12:43:04.871496] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.671 12:43:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.671 "name": "raid_bdev1", 00:23:22.671 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:22.671 "strip_size_kb": 0, 00:23:22.671 "state": "online", 00:23:22.671 "raid_level": "raid1", 00:23:22.671 "superblock": true, 00:23:22.671 "num_base_bdevs": 2, 00:23:22.671 "num_base_bdevs_discovered": 2, 00:23:22.671 "num_base_bdevs_operational": 2, 00:23:22.671 "base_bdevs_list": [ 00:23:22.671 { 00:23:22.671 "name": "spare", 00:23:22.671 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:22.671 "is_configured": true, 00:23:22.671 "data_offset": 2048, 00:23:22.671 "data_size": 63488 00:23:22.671 }, 00:23:22.671 { 00:23:22.671 "name": "BaseBdev2", 00:23:22.671 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:22.671 "is_configured": true, 00:23:22.671 "data_offset": 2048, 00:23:22.671 "data_size": 63488 00:23:22.671 } 00:23:22.671 ] 00:23:22.671 }' 00:23:22.671 12:43:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.671 12:43:04 -- common/autotest_common.sh@10 -- # set +x 00:23:23.239 12:43:05 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:23.239 12:43:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:23.239 12:43:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:23.239 12:43:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.240 "name": "raid_bdev1", 00:23:23.240 "uuid": "817826f0-3d00-4006-bc76-76b4e76b0b43", 00:23:23.240 "strip_size_kb": 0, 00:23:23.240 "state": "online", 00:23:23.240 "raid_level": "raid1", 
00:23:23.240 "superblock": true, 00:23:23.240 "num_base_bdevs": 2, 00:23:23.240 "num_base_bdevs_discovered": 2, 00:23:23.240 "num_base_bdevs_operational": 2, 00:23:23.240 "base_bdevs_list": [ 00:23:23.240 { 00:23:23.240 "name": "spare", 00:23:23.240 "uuid": "0ee3df0f-5088-566c-9778-9fb9c6dacb22", 00:23:23.240 "is_configured": true, 00:23:23.240 "data_offset": 2048, 00:23:23.240 "data_size": 63488 00:23:23.240 }, 00:23:23.240 { 00:23:23.240 "name": "BaseBdev2", 00:23:23.240 "uuid": "a250fb17-f338-5c7c-bffc-166ab1f4a157", 00:23:23.240 "is_configured": true, 00:23:23.240 "data_offset": 2048, 00:23:23.240 "data_size": 63488 00:23:23.240 } 00:23:23.240 ] 00:23:23.240 }' 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.240 12:43:05 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:23.499 12:43:05 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.499 12:43:05 -- bdev/bdev_raid.sh@709 -- # killprocess 123248 00:23:23.499 12:43:05 -- common/autotest_common.sh@926 -- # '[' -z 123248 ']' 00:23:23.499 12:43:05 -- common/autotest_common.sh@930 -- # kill -0 123248 00:23:23.499 12:43:05 -- common/autotest_common.sh@931 -- # uname 00:23:23.499 12:43:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:23.499 12:43:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123248 00:23:23.499 12:43:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:23.499 12:43:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:23.499 12:43:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123248' 00:23:23.499 killing process with pid 123248 00:23:23.499 12:43:05 -- common/autotest_common.sh@945 -- # kill 123248 00:23:23.499 Received shutdown signal, test time was about 60.000000 seconds 00:23:23.499 00:23:23.499 Latency(us) 00:23:23.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.499 =================================================================================================================== 00:23:23.499 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.499 [2024-10-01 12:43:05.966361] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.499 [2024-10-01 12:43:05.966443] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.499 [2024-10-01 12:43:05.966499] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.499 [2024-10-01 12:43:05.966508] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:23:23.499 12:43:05 -- common/autotest_common.sh@950 -- # wait 123248 00:23:23.759 [2024-10-01 12:43:06.272763] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:25.667 00:23:25.667 real 0m23.084s 00:23:25.667 user 0m30.732s 00:23:25.667 sys 0m4.508s 00:23:25.667 12:43:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.667 12:43:07 -- common/autotest_common.sh@10 -- # set 
+x 00:23:25.667 ************************************ 00:23:25.667 END TEST raid_rebuild_test_sb 00:23:25.667 ************************************ 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:23:25.667 12:43:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:25.667 12:43:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:25.667 12:43:07 -- common/autotest_common.sh@10 -- # set +x 00:23:25.667 ************************************ 00:23:25.667 START TEST raid_rebuild_test_io 00:23:25.667 ************************************ 00:23:25.667 12:43:07 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:25.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@544 -- # raid_pid=123861 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@545 -- # waitforlisten 123861 /var/tmp/spdk-raid.sock 00:23:25.667 12:43:07 -- common/autotest_common.sh@819 -- # '[' -z 123861 ']' 00:23:25.667 12:43:07 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:25.667 12:43:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:25.667 12:43:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:25.667 12:43:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:25.667 12:43:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:25.667 12:43:07 -- common/autotest_common.sh@10 -- # set +x 00:23:25.667 [2024-10-01 12:43:07.841867] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
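This is the background-I/O variant of the rebuild test, so instead of a plain SPDK target the harness starts bdevperf and keeps a workload running on raid_bdev1 while base bdevs are removed and re-added. From the command line above: -w randrw -M 50 is a 50/50 random read/write mix, -t 60 caps the run at 60 seconds, -q 2 -o 3M keeps two 3 MiB I/Os outstanding (which is why bdevperf warns that the 65536-byte zero-copy threshold is exceeded), -z makes the app initialize and then wait for an RPC before generating I/O, and -L bdev_raid enables the raid debug log seen throughout this output. A sketch of the launch-and-trigger pattern, with paths taken from this log (-T and -U are passed through verbatim; their semantics can differ between SPDK versions):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
# Host the raid bdev inside bdevperf; with -z it comes up, serves RPC on
# $SOCK, and does not start I/O yet.
$SPDK/build/examples/bdevperf -r $SOCK -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
# ... construct BaseBdev1/BaseBdev2 and raid_bdev1 over $SOCK ...
# Kick off the 60-second workload; the remove/re-add steps below run while
# this I/O is in flight.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &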
00:23:25.667 [2024-10-01 12:43:07.842017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123861 ] 00:23:25.667 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:25.667 Zero copy mechanism will not be used. 00:23:25.667 [2024-10-01 12:43:08.012340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.927 [2024-10-01 12:43:08.244119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.186 [2024-10-01 12:43:08.504538] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.122 12:43:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:27.122 12:43:09 -- common/autotest_common.sh@852 -- # return 0 00:23:27.122 12:43:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:27.122 12:43:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:27.122 12:43:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:27.122 BaseBdev1 00:23:27.122 12:43:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:27.122 12:43:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:27.122 12:43:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:27.381 BaseBdev2 00:23:27.381 12:43:09 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:27.639 spare_malloc 00:23:27.639 12:43:10 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:27.898 spare_delay 00:23:27.898 12:43:10 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:27.898 [2024-10-01 12:43:10.364732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:27.898 [2024-10-01 12:43:10.364844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.898 [2024-10-01 12:43:10.364880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:23:27.898 [2024-10-01 12:43:10.364934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.898 [2024-10-01 12:43:10.367520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.898 [2024-10-01 12:43:10.367580] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:27.898 spare 00:23:27.898 12:43:10 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:28.157 [2024-10-01 12:43:10.540523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.157 [2024-10-01 12:43:10.542539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.157 [2024-10-01 12:43:10.542634] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:23:28.157 [2024-10-01 12:43:10.542645] bdev_raid.c:1585:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 65536, blocklen 512 00:23:28.157 [2024-10-01 12:43:10.542771] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:28.157 [2024-10-01 12:43:10.543071] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:23:28.157 [2024-10-01 12:43:10.543092] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:23:28.157 [2024-10-01 12:43:10.543237] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.157 12:43:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.416 12:43:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.416 "name": "raid_bdev1", 00:23:28.416 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:28.416 "strip_size_kb": 0, 00:23:28.416 "state": "online", 00:23:28.416 "raid_level": "raid1", 00:23:28.416 "superblock": false, 00:23:28.416 "num_base_bdevs": 2, 00:23:28.416 "num_base_bdevs_discovered": 2, 00:23:28.416 "num_base_bdevs_operational": 2, 00:23:28.416 "base_bdevs_list": [ 00:23:28.416 { 00:23:28.416 "name": "BaseBdev1", 00:23:28.416 "uuid": "2a16dc88-c3e4-4b33-b06f-8c407bedad6b", 00:23:28.416 "is_configured": true, 00:23:28.416 "data_offset": 0, 00:23:28.416 "data_size": 65536 00:23:28.416 }, 00:23:28.416 { 00:23:28.416 "name": "BaseBdev2", 00:23:28.416 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:28.416 "is_configured": true, 00:23:28.416 "data_offset": 0, 00:23:28.416 "data_size": 65536 00:23:28.416 } 00:23:28.416 ] 00:23:28.416 }' 00:23:28.416 12:43:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.416 12:43:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.985 12:43:11 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:28.985 12:43:11 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:28.985 [2024-10-01 12:43:11.407417] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.985 12:43:11 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:23:28.985 12:43:11 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.985 12:43:11 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:29.244 12:43:11 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:29.244 12:43:11 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:29.244 12:43:11 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:29.244 12:43:11 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:29.244 [2024-10-01 12:43:11.704249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:29.244 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:29.244 Zero copy mechanism will not be used. 00:23:29.244 Running I/O for 60 seconds... 00:23:29.244 [2024-10-01 12:43:11.776074] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:29.504 [2024-10-01 12:43:11.786370] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.504 12:43:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.504 12:43:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.504 "name": "raid_bdev1", 00:23:29.504 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:29.505 "strip_size_kb": 0, 00:23:29.505 "state": "online", 00:23:29.505 "raid_level": "raid1", 00:23:29.505 "superblock": false, 00:23:29.505 "num_base_bdevs": 2, 00:23:29.505 "num_base_bdevs_discovered": 1, 00:23:29.505 "num_base_bdevs_operational": 1, 00:23:29.505 "base_bdevs_list": [ 00:23:29.505 { 00:23:29.505 "name": null, 00:23:29.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.505 "is_configured": false, 00:23:29.505 "data_offset": 0, 00:23:29.505 "data_size": 65536 00:23:29.505 }, 00:23:29.505 { 00:23:29.505 "name": "BaseBdev2", 00:23:29.505 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:29.505 "is_configured": true, 00:23:29.505 "data_offset": 0, 00:23:29.505 "data_size": 65536 00:23:29.505 } 00:23:29.505 ] 00:23:29.505 }' 00:23:29.505 12:43:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.505 12:43:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.073 12:43:12 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:30.330 [2024-10-01 12:43:12.705640] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:30.330 [2024-10-01 12:43:12.705711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:30.330 [2024-10-01 12:43:12.759005] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:30.330 [2024-10-01 12:43:12.761221] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:23:30.330 12:43:12 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:30.587 [2024-10-01 12:43:12.873567] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:30.587 [2024-10-01 12:43:12.873987] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:30.587 [2024-10-01 12:43:13.081697] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:30.587 [2024-10-01 12:43:13.081898] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:31.153 [2024-10-01 12:43:13.398912] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:31.153 [2024-10-01 12:43:13.621452] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:31.153 [2024-10-01 12:43:13.621693] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:31.411 12:43:13 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.411 12:43:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.411 12:43:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:31.411 12:43:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:31.411 12:43:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.411 12:43:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.411 12:43:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.411 [2024-10-01 12:43:13.938675] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:31.411 [2024-10-01 12:43:13.939032] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:31.668 12:43:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.668 "name": "raid_bdev1", 00:23:31.668 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:31.668 "strip_size_kb": 0, 00:23:31.668 "state": "online", 00:23:31.668 "raid_level": "raid1", 00:23:31.668 "superblock": false, 00:23:31.668 "num_base_bdevs": 2, 00:23:31.668 "num_base_bdevs_discovered": 2, 00:23:31.668 "num_base_bdevs_operational": 2, 00:23:31.668 "process": { 00:23:31.668 "type": "rebuild", 00:23:31.668 "target": "spare", 00:23:31.668 "progress": { 00:23:31.668 "blocks": 12288, 00:23:31.668 "percent": 18 00:23:31.668 } 00:23:31.668 }, 00:23:31.668 "base_bdevs_list": [ 00:23:31.668 { 00:23:31.668 "name": "spare", 00:23:31.668 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:31.668 "is_configured": true, 00:23:31.668 "data_offset": 0, 00:23:31.668 "data_size": 65536 00:23:31.668 }, 00:23:31.668 { 00:23:31.668 "name": "BaseBdev2", 00:23:31.668 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:31.668 "is_configured": true, 00:23:31.668 "data_offset": 0, 00:23:31.668 "data_size": 65536 00:23:31.668 } 00:23:31.668 ] 00:23:31.669 }' 00:23:31.669 12:43:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:31.669 12:43:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.669 12:43:13 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:23:31.669 12:43:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.669 12:43:14 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:31.669 [2024-10-01 12:43:14.163377] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:31.669 [2024-10-01 12:43:14.163550] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:31.926 [2024-10-01 12:43:14.204020] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.926 [2024-10-01 12:43:14.304874] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:31.926 [2024-10-01 12:43:14.311792] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.926 [2024-10-01 12:43:14.351929] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.926 12:43:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.220 12:43:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.220 "name": "raid_bdev1", 00:23:32.220 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:32.220 "strip_size_kb": 0, 00:23:32.220 "state": "online", 00:23:32.220 "raid_level": "raid1", 00:23:32.220 "superblock": false, 00:23:32.220 "num_base_bdevs": 2, 00:23:32.220 "num_base_bdevs_discovered": 1, 00:23:32.220 "num_base_bdevs_operational": 1, 00:23:32.220 "base_bdevs_list": [ 00:23:32.220 { 00:23:32.220 "name": null, 00:23:32.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.220 "is_configured": false, 00:23:32.220 "data_offset": 0, 00:23:32.220 "data_size": 65536 00:23:32.220 }, 00:23:32.220 { 00:23:32.220 "name": "BaseBdev2", 00:23:32.220 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:32.220 "is_configured": true, 00:23:32.220 "data_offset": 0, 00:23:32.220 "data_size": 65536 00:23:32.220 } 00:23:32.220 ] 00:23:32.220 }' 00:23:32.220 12:43:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.220 12:43:14 -- common/autotest_common.sh@10 -- # set +x 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:32.789 12:43:15 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:32.789 "name": "raid_bdev1", 00:23:32.789 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:32.789 "strip_size_kb": 0, 00:23:32.789 "state": "online", 00:23:32.789 "raid_level": "raid1", 00:23:32.789 "superblock": false, 00:23:32.789 "num_base_bdevs": 2, 00:23:32.789 "num_base_bdevs_discovered": 1, 00:23:32.789 "num_base_bdevs_operational": 1, 00:23:32.789 "base_bdevs_list": [ 00:23:32.789 { 00:23:32.789 "name": null, 00:23:32.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.789 "is_configured": false, 00:23:32.789 "data_offset": 0, 00:23:32.789 "data_size": 65536 00:23:32.789 }, 00:23:32.789 { 00:23:32.789 "name": "BaseBdev2", 00:23:32.789 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:32.789 "is_configured": true, 00:23:32.789 "data_offset": 0, 00:23:32.789 "data_size": 65536 00:23:32.789 } 00:23:32.789 ] 00:23:32.789 }' 00:23:32.789 12:43:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:33.048 12:43:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:33.048 12:43:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:33.048 12:43:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:33.048 12:43:15 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:33.048 [2024-10-01 12:43:15.553632] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:33.048 [2024-10-01 12:43:15.553705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:33.307 [2024-10-01 12:43:15.601342] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:33.307 [2024-10-01 12:43:15.603450] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:33.307 12:43:15 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:33.307 [2024-10-01 12:43:15.716092] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:33.307 [2024-10-01 12:43:15.716469] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:33.567 [2024-10-01 12:43:15.918053] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:33.567 [2024-10-01 12:43:15.918258] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:34.135 [2024-10-01 12:43:16.569502] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:34.135 [2024-10-01 12:43:16.569910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:34.135 12:43:16 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.135 12:43:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.135 12:43:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:34.135 12:43:16 -- bdev/bdev_raid.sh@185 -- # 
local target=spare 00:23:34.135 12:43:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.135 12:43:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.135 12:43:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.392 [2024-10-01 12:43:16.693444] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:34.392 "name": "raid_bdev1", 00:23:34.392 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:34.392 "strip_size_kb": 0, 00:23:34.392 "state": "online", 00:23:34.392 "raid_level": "raid1", 00:23:34.392 "superblock": false, 00:23:34.392 "num_base_bdevs": 2, 00:23:34.392 "num_base_bdevs_discovered": 2, 00:23:34.392 "num_base_bdevs_operational": 2, 00:23:34.392 "process": { 00:23:34.392 "type": "rebuild", 00:23:34.392 "target": "spare", 00:23:34.392 "progress": { 00:23:34.392 "blocks": 16384, 00:23:34.392 "percent": 25 00:23:34.392 } 00:23:34.392 }, 00:23:34.392 "base_bdevs_list": [ 00:23:34.392 { 00:23:34.392 "name": "spare", 00:23:34.392 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:34.392 "is_configured": true, 00:23:34.392 "data_offset": 0, 00:23:34.392 "data_size": 65536 00:23:34.392 }, 00:23:34.392 { 00:23:34.392 "name": "BaseBdev2", 00:23:34.392 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:34.392 "is_configured": true, 00:23:34.392 "data_offset": 0, 00:23:34.392 "data_size": 65536 00:23:34.392 } 00:23:34.392 ] 00:23:34.392 }' 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@657 -- # local timeout=383 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.392 12:43:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.650 [2024-10-01 12:43:17.002365] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:34.650 12:43:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:34.650 "name": "raid_bdev1", 00:23:34.650 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:34.650 "strip_size_kb": 0, 00:23:34.650 "state": "online", 00:23:34.650 "raid_level": "raid1", 00:23:34.650 "superblock": false, 00:23:34.650 
"num_base_bdevs": 2, 00:23:34.650 "num_base_bdevs_discovered": 2, 00:23:34.650 "num_base_bdevs_operational": 2, 00:23:34.650 "process": { 00:23:34.650 "type": "rebuild", 00:23:34.650 "target": "spare", 00:23:34.650 "progress": { 00:23:34.650 "blocks": 20480, 00:23:34.650 "percent": 31 00:23:34.650 } 00:23:34.650 }, 00:23:34.650 "base_bdevs_list": [ 00:23:34.650 { 00:23:34.650 "name": "spare", 00:23:34.650 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:34.650 "is_configured": true, 00:23:34.650 "data_offset": 0, 00:23:34.650 "data_size": 65536 00:23:34.650 }, 00:23:34.650 { 00:23:34.650 "name": "BaseBdev2", 00:23:34.650 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:34.650 "is_configured": true, 00:23:34.650 "data_offset": 0, 00:23:34.650 "data_size": 65536 00:23:34.650 } 00:23:34.650 ] 00:23:34.650 }' 00:23:34.650 12:43:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:34.650 [2024-10-01 12:43:17.115286] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:34.650 12:43:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.650 12:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:34.651 12:43:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.651 12:43:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:34.909 [2024-10-01 12:43:17.424137] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:35.480 [2024-10-01 12:43:17.730798] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:35.480 [2024-10-01 12:43:17.731107] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:35.480 [2024-10-01 12:43:17.856455] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.740 12:43:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.998 [2024-10-01 12:43:18.284659] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:35.998 12:43:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:35.998 "name": "raid_bdev1", 00:23:35.998 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:35.998 "strip_size_kb": 0, 00:23:35.998 "state": "online", 00:23:35.998 "raid_level": "raid1", 00:23:35.998 "superblock": false, 00:23:35.998 "num_base_bdevs": 2, 00:23:35.998 "num_base_bdevs_discovered": 2, 00:23:35.998 "num_base_bdevs_operational": 2, 00:23:35.998 "process": { 00:23:35.998 "type": "rebuild", 00:23:35.998 "target": "spare", 00:23:35.998 "progress": { 00:23:35.998 "blocks": 40960, 00:23:35.998 "percent": 62 00:23:35.998 } 
00:23:35.998 }, 00:23:35.998 "base_bdevs_list": [ 00:23:35.998 { 00:23:35.998 "name": "spare", 00:23:35.998 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:35.998 "is_configured": true, 00:23:35.998 "data_offset": 0, 00:23:35.998 "data_size": 65536 00:23:35.998 }, 00:23:35.998 { 00:23:35.998 "name": "BaseBdev2", 00:23:35.998 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:35.998 "is_configured": true, 00:23:35.998 "data_offset": 0, 00:23:35.998 "data_size": 65536 00:23:35.998 } 00:23:35.998 ] 00:23:35.998 }' 00:23:35.998 12:43:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:35.998 12:43:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.998 12:43:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:35.998 12:43:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.998 12:43:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:35.998 [2024-10-01 12:43:18.503911] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:36.566 [2024-10-01 12:43:18.839161] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:36.824 [2024-10-01 12:43:19.176974] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:37.081 [2024-10-01 12:43:19.383201] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:37.081 [2024-10-01 12:43:19.383367] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.081 12:43:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.339 12:43:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:37.339 "name": "raid_bdev1", 00:23:37.339 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:37.339 "strip_size_kb": 0, 00:23:37.339 "state": "online", 00:23:37.339 "raid_level": "raid1", 00:23:37.339 "superblock": false, 00:23:37.339 "num_base_bdevs": 2, 00:23:37.339 "num_base_bdevs_discovered": 2, 00:23:37.339 "num_base_bdevs_operational": 2, 00:23:37.339 "process": { 00:23:37.339 "type": "rebuild", 00:23:37.339 "target": "spare", 00:23:37.339 "progress": { 00:23:37.339 "blocks": 61440, 00:23:37.339 "percent": 93 00:23:37.339 } 00:23:37.339 }, 00:23:37.339 "base_bdevs_list": [ 00:23:37.339 { 00:23:37.339 "name": "spare", 00:23:37.339 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:37.339 "is_configured": true, 00:23:37.339 "data_offset": 0, 00:23:37.339 "data_size": 65536 00:23:37.339 }, 00:23:37.339 { 00:23:37.339 "name": "BaseBdev2", 00:23:37.339 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:37.339 "is_configured": true, 00:23:37.339 "data_offset": 0, 00:23:37.339 "data_size": 
65536 00:23:37.339 } 00:23:37.339 ] 00:23:37.339 }' 00:23:37.339 12:43:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:37.339 12:43:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.339 12:43:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:37.339 12:43:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.339 12:43:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:37.339 [2024-10-01 12:43:19.810847] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:37.599 [2024-10-01 12:43:19.910748] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:37.599 [2024-10-01 12:43:19.912421] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:38.535 "name": "raid_bdev1", 00:23:38.535 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:38.535 "strip_size_kb": 0, 00:23:38.535 "state": "online", 00:23:38.535 "raid_level": "raid1", 00:23:38.535 "superblock": false, 00:23:38.535 "num_base_bdevs": 2, 00:23:38.535 "num_base_bdevs_discovered": 2, 00:23:38.535 "num_base_bdevs_operational": 2, 00:23:38.535 "base_bdevs_list": [ 00:23:38.535 { 00:23:38.535 "name": "spare", 00:23:38.535 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:38.535 "is_configured": true, 00:23:38.535 "data_offset": 0, 00:23:38.535 "data_size": 65536 00:23:38.535 }, 00:23:38.535 { 00:23:38.535 "name": "BaseBdev2", 00:23:38.535 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:38.535 "is_configured": true, 00:23:38.535 "data_offset": 0, 00:23:38.535 "data_size": 65536 00:23:38.535 } 00:23:38.535 ] 00:23:38.535 }' 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:38.535 12:43:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@660 -- # break 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.535 12:43:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.795 12:43:21 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:38.795 "name": "raid_bdev1", 00:23:38.795 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:38.795 "strip_size_kb": 0, 00:23:38.795 "state": "online", 00:23:38.795 "raid_level": "raid1", 00:23:38.795 "superblock": false, 00:23:38.795 "num_base_bdevs": 2, 00:23:38.795 "num_base_bdevs_discovered": 2, 00:23:38.795 "num_base_bdevs_operational": 2, 00:23:38.795 "base_bdevs_list": [ 00:23:38.795 { 00:23:38.795 "name": "spare", 00:23:38.795 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:38.795 "is_configured": true, 00:23:38.795 "data_offset": 0, 00:23:38.795 "data_size": 65536 00:23:38.795 }, 00:23:38.795 { 00:23:38.795 "name": "BaseBdev2", 00:23:38.795 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:38.795 "is_configured": true, 00:23:38.795 "data_offset": 0, 00:23:38.795 "data_size": 65536 00:23:38.795 } 00:23:38.795 ] 00:23:38.795 }' 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.795 12:43:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.054 12:43:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:39.054 "name": "raid_bdev1", 00:23:39.054 "uuid": "eeac0d94-7b84-4d15-b352-c1e42e4e44e0", 00:23:39.054 "strip_size_kb": 0, 00:23:39.054 "state": "online", 00:23:39.054 "raid_level": "raid1", 00:23:39.054 "superblock": false, 00:23:39.054 "num_base_bdevs": 2, 00:23:39.054 "num_base_bdevs_discovered": 2, 00:23:39.054 "num_base_bdevs_operational": 2, 00:23:39.054 "base_bdevs_list": [ 00:23:39.054 { 00:23:39.054 "name": "spare", 00:23:39.054 "uuid": "5a1d9f0c-d644-5675-869c-bb311fe6e503", 00:23:39.054 "is_configured": true, 00:23:39.054 "data_offset": 0, 00:23:39.054 "data_size": 65536 00:23:39.054 }, 00:23:39.054 { 00:23:39.054 "name": "BaseBdev2", 00:23:39.054 "uuid": "c4d5d373-0386-46e0-abac-6b6264c8d9ea", 00:23:39.054 "is_configured": true, 00:23:39.054 "data_offset": 0, 00:23:39.054 "data_size": 65536 00:23:39.054 } 00:23:39.054 ] 00:23:39.054 }' 00:23:39.054 12:43:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:39.054 12:43:21 -- common/autotest_common.sh@10 -- # set +x 00:23:39.622 12:43:22 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:39.880 [2024-10-01 12:43:22.190653] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:39.880 [2024-10-01 12:43:22.190701] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:39.880
00:23:39.880 Latency(us)
00:23:39.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:39.880 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:23:39.880 raid_bdev1 : 10.57 122.15 366.45 0.00 0.00 11305.94 289.52 112016.55
00:23:39.880 ===================================================================================================================
00:23:39.880 Total : 122.15 366.45 0.00 0.00 11305.94 289.52 112016.55
00:23:39.881 [2024-10-01 12:43:22.279558] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:39.881 [2024-10-01 12:43:22.279602] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:39.881 [2024-10-01 12:43:22.279673] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:39.881 [2024-10-01 12:43:22.279683] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline
00:23:39.881 0
00:23:39.881 12:43:22 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:40.139 12:43:22 -- bdev/bdev_raid.sh@671 -- # jq length
00:23:40.139 12:43:22 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:23:40.139 12:43:22 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:23:40.139 12:43:22 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@12 -- # local i
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:23:40.139 12:43:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:23:40.398 /dev/nbd0
00:23:40.398 12:43:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:23:40.398 12:43:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:23:40.398 12:43:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:23:40.398 12:43:22 -- common/autotest_common.sh@857 -- # local i
00:23:40.398 12:43:22 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:23:40.398 12:43:22 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:23:40.398 12:43:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:23:40.398 12:43:22 -- common/autotest_common.sh@861 -- # break
00:23:40.398 12:43:22 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:23:40.398 12:43:22 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:23:40.398 12:43:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:40.398 1+0 records in
00:23:40.398 1+0 records out
00:23:40.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565778 s, 7.2 MB/s
00:23:40.398 12:43:22 -- common/autotest_common.sh@874 -- # stat -c
%s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.398 12:43:22 -- common/autotest_common.sh@874 -- # size=4096 00:23:40.398 12:43:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.398 12:43:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:40.398 12:43:22 -- common/autotest_common.sh@877 -- # return 0 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.398 12:43:22 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:40.398 12:43:22 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:23:40.398 12:43:22 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@12 -- # local i 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.398 12:43:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:40.658 /dev/nbd1 00:23:40.658 12:43:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:40.658 12:43:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:40.658 12:43:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:40.658 12:43:22 -- common/autotest_common.sh@857 -- # local i 00:23:40.658 12:43:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:40.658 12:43:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:40.658 12:43:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:40.658 12:43:22 -- common/autotest_common.sh@861 -- # break 00:23:40.658 12:43:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:40.658 12:43:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:40.658 12:43:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.658 1+0 records in 00:23:40.658 1+0 records out 00:23:40.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593442 s, 6.9 MB/s 00:23:40.658 12:43:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.658 12:43:23 -- common/autotest_common.sh@874 -- # size=4096 00:23:40.658 12:43:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.658 12:43:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:40.658 12:43:23 -- common/autotest_common.sh@877 -- # return 0 00:23:40.658 12:43:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.658 12:43:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.658 12:43:23 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:40.916 12:43:23 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.916 
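Note on the waitfornbd helper traced above: it polls /proc/partitions until the kernel exposes the device, then proves the device actually serves I/O with one O_DIRECT read. A minimal sketch of that pattern (illustrative only; the real helper lives in autotest_common.sh and dd's into a scratch file that it stats and removes):

# Hedged sketch of the probe loop seen in the xtrace; the function name
# and the sleep interval are illustrative, not the SPDK helper itself.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # One direct-I/O read confirms the NBD server answers requests.
    dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
}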
12:43:23 -- bdev/nbd_common.sh@51 -- # local i 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@41 -- # break 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.916 12:43:23 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@51 -- # local i 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.916 12:43:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@41 -- # break 00:23:41.175 12:43:23 -- bdev/nbd_common.sh@45 -- # return 0 00:23:41.175 12:43:23 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:41.175 12:43:23 -- bdev/bdev_raid.sh@709 -- # killprocess 123861 00:23:41.175 12:43:23 -- common/autotest_common.sh@926 -- # '[' -z 123861 ']' 00:23:41.175 12:43:23 -- common/autotest_common.sh@930 -- # kill -0 123861 00:23:41.175 12:43:23 -- common/autotest_common.sh@931 -- # uname 00:23:41.175 12:43:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:41.175 12:43:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123861 00:23:41.175 12:43:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:41.175 killing process with pid 123861 00:23:41.175 12:43:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:41.175 12:43:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123861' 00:23:41.175 Received shutdown signal, test time was about 11.959179 seconds 00:23:41.175 00:23:41.175 Latency(us) 00:23:41.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.175 =================================================================================================================== 00:23:41.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.175 12:43:23 -- common/autotest_common.sh@945 -- # kill 123861 00:23:41.175 [2024-10-01 12:43:23.646257] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:41.175 12:43:23 -- common/autotest_common.sh@950 -- # wait 123861 00:23:41.433 [2024-10-01 12:43:23.880873] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:43.336 ************************************ 00:23:43.336 END TEST raid_rebuild_test_io 00:23:43.336 ************************************ 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:43.336 00:23:43.336 real 0m17.600s 00:23:43.336 user 0m24.994s 00:23:43.336 sys 0m2.313s 00:23:43.336 12:43:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.336 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:23:43.336 12:43:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:43.336 12:43:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:43.336 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:23:43.336 ************************************ 00:23:43.336 START TEST raid_rebuild_test_sb_io 00:23:43.336 ************************************ 00:23:43.336 12:43:25 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:43.336 12:43:25 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@544 -- # raid_pid=124339 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:43.337 12:43:25 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124339 /var/tmp/spdk-raid.sock 00:23:43.337 12:43:25 -- common/autotest_common.sh@819 -- # '[' -z 124339 ']' 00:23:43.337 12:43:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:43.337 12:43:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:43.337 12:43:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:23:43.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:43.337 12:43:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:43.337 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:23:43.337 [2024-10-01 12:43:25.534145] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:43.337 [2024-10-01 12:43:25.534769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124339 ] 00:23:43.337 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:43.337 Zero copy mechanism will not be used. 00:23:43.337 [2024-10-01 12:43:25.702137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.595 [2024-10-01 12:43:25.904459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.853 [2024-10-01 12:43:26.132791] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:43.853 12:43:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:43.853 12:43:26 -- common/autotest_common.sh@852 -- # return 0 00:23:43.853 12:43:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:43.853 12:43:26 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:43.853 12:43:26 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:44.112 BaseBdev1_malloc 00:23:44.112 12:43:26 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:44.408 [2024-10-01 12:43:26.746635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:44.408 [2024-10-01 12:43:26.746720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.408 [2024-10-01 12:43:26.746764] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:44.408 [2024-10-01 12:43:26.746806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.408 [2024-10-01 12:43:26.749066] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.408 [2024-10-01 12:43:26.749116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:44.408 BaseBdev1 00:23:44.408 12:43:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:44.408 12:43:26 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:44.408 12:43:26 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:44.667 BaseBdev2_malloc 00:23:44.667 12:43:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:44.667 [2024-10-01 12:43:27.187885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:44.667 [2024-10-01 12:43:27.187963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.667 [2024-10-01 12:43:27.188016] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:44.667 [2024-10-01 12:43:27.188061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:23:44.667 [2024-10-01 12:43:27.190226] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.667 [2024-10-01 12:43:27.190270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:44.667 BaseBdev2 00:23:44.926 12:43:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:44.926 spare_malloc 00:23:44.926 12:43:27 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:45.185 spare_delay 00:23:45.185 12:43:27 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:45.445 [2024-10-01 12:43:27.804226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:45.445 [2024-10-01 12:43:27.804287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.445 [2024-10-01 12:43:27.804321] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:23:45.445 [2024-10-01 12:43:27.804359] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.445 [2024-10-01 12:43:27.806548] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.445 [2024-10-01 12:43:27.806606] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:45.445 spare 00:23:45.445 12:43:27 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:45.705 [2024-10-01 12:43:27.988067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:45.705 [2024-10-01 12:43:27.990046] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.705 [2024-10-01 12:43:27.990193] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:23:45.705 [2024-10-01 12:43:27.990202] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:45.705 [2024-10-01 12:43:27.990319] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:45.705 [2024-10-01 12:43:27.990654] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:23:45.705 [2024-10-01 12:43:27.990673] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:23:45.705 [2024-10-01 12:43:27.990810] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.705 12:43:27 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:45.705 12:43:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.705 12:43:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:45.705 12:43:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.705 "name": "raid_bdev1", 00:23:45.705 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:45.705 "strip_size_kb": 0, 00:23:45.705 "state": "online", 00:23:45.705 "raid_level": "raid1", 00:23:45.705 "superblock": true, 00:23:45.705 "num_base_bdevs": 2, 00:23:45.705 "num_base_bdevs_discovered": 2, 00:23:45.705 "num_base_bdevs_operational": 2, 00:23:45.705 "base_bdevs_list": [ 00:23:45.705 { 00:23:45.705 "name": "BaseBdev1", 00:23:45.705 "uuid": "a5ba383f-2f24-5f03-8663-d3cba4e30194", 00:23:45.705 "is_configured": true, 00:23:45.705 "data_offset": 2048, 00:23:45.705 "data_size": 63488 00:23:45.705 }, 00:23:45.705 { 00:23:45.705 "name": "BaseBdev2", 00:23:45.705 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:45.705 "is_configured": true, 00:23:45.705 "data_offset": 2048, 00:23:45.705 "data_size": 63488 00:23:45.705 } 00:23:45.705 ] 00:23:45.705 }' 00:23:45.705 12:43:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.705 12:43:28 -- common/autotest_common.sh@10 -- # set +x 00:23:46.275 12:43:28 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:46.275 12:43:28 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:46.535 [2024-10-01 12:43:28.890916] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.535 12:43:28 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:46.535 12:43:28 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:46.535 12:43:28 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.794 12:43:29 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:46.794 12:43:29 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:46.794 12:43:29 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:46.794 12:43:29 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:46.794 [2024-10-01 12:43:29.170775] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:46.794 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:46.794 Zero copy mechanism will not be used. 00:23:46.795 Running I/O for 60 seconds... 
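For orientation, the setup traced above reduces to the following RPC sequence (commands copied from the xtrace; treat this as a sketch, not the test script itself). Each member is a 32 MiB malloc bdev with 512 B blocks, i.e. 65536 blocks; creating the raid with -s reserves a 2048-block data_offset for the on-disk superblock, which leaves the 63488 usable blocks reported in the JSON above. The delay-backed "spare" is only prepared here; it is attached later with bdev_raid_add_base_bdev to drive the rebuild.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for b in BaseBdev1 BaseBdev2; do
    # Passthru on top of malloc lets the test detach and reattach members.
    $rpc bdev_malloc_create 32 512 -b ${b}_malloc
    $rpc bdev_passthru_create -b ${b}_malloc -p $b
done
# The future rebuild target sits behind a delay bdev so the rebuild runs
# slowly enough to be observed, and interrupted, mid-flight.
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare
# -s writes a superblock to every member; raid1 mirrors the two bases.
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1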
00:23:46.795 [2024-10-01 12:43:29.265899] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:46.795 [2024-10-01 12:43:29.276508] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.795 12:43:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.054 12:43:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.054 "name": "raid_bdev1", 00:23:47.054 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:47.054 "strip_size_kb": 0, 00:23:47.054 "state": "online", 00:23:47.054 "raid_level": "raid1", 00:23:47.054 "superblock": true, 00:23:47.054 "num_base_bdevs": 2, 00:23:47.054 "num_base_bdevs_discovered": 1, 00:23:47.054 "num_base_bdevs_operational": 1, 00:23:47.054 "base_bdevs_list": [ 00:23:47.054 { 00:23:47.054 "name": null, 00:23:47.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.054 "is_configured": false, 00:23:47.054 "data_offset": 2048, 00:23:47.054 "data_size": 63488 00:23:47.054 }, 00:23:47.054 { 00:23:47.054 "name": "BaseBdev2", 00:23:47.054 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:47.054 "is_configured": true, 00:23:47.054 "data_offset": 2048, 00:23:47.054 "data_size": 63488 00:23:47.054 } 00:23:47.054 ] 00:23:47.054 }' 00:23:47.054 12:43:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.054 12:43:29 -- common/autotest_common.sh@10 -- # set +x 00:23:47.627 12:43:30 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:47.887 [2024-10-01 12:43:30.206398] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:47.887 [2024-10-01 12:43:30.206456] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.887 12:43:30 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:47.887 [2024-10-01 12:43:30.255451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:47.887 [2024-10-01 12:43:30.257338] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:47.887 [2024-10-01 12:43:30.380828] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:47.887 [2024-10-01 12:43:30.381107] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:48.146 [2024-10-01 12:43:30.587283] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:23:48.146 [2024-10-01 12:43:30.587427] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:48.406 [2024-10-01 12:43:30.907192] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:48.666 [2024-10-01 12:43:31.132612] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.926 [2024-10-01 12:43:31.363257] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:48.926 "name": "raid_bdev1", 00:23:48.926 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:48.926 "strip_size_kb": 0, 00:23:48.926 "state": "online", 00:23:48.926 "raid_level": "raid1", 00:23:48.926 "superblock": true, 00:23:48.926 "num_base_bdevs": 2, 00:23:48.926 "num_base_bdevs_discovered": 2, 00:23:48.926 "num_base_bdevs_operational": 2, 00:23:48.926 "process": { 00:23:48.926 "type": "rebuild", 00:23:48.926 "target": "spare", 00:23:48.926 "progress": { 00:23:48.926 "blocks": 14336, 00:23:48.926 "percent": 22 00:23:48.926 } 00:23:48.926 }, 00:23:48.926 "base_bdevs_list": [ 00:23:48.926 { 00:23:48.926 "name": "spare", 00:23:48.926 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:48.926 "is_configured": true, 00:23:48.926 "data_offset": 2048, 00:23:48.926 "data_size": 63488 00:23:48.926 }, 00:23:48.926 { 00:23:48.926 "name": "BaseBdev2", 00:23:48.926 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:48.926 "is_configured": true, 00:23:48.926 "data_offset": 2048, 00:23:48.926 "data_size": 63488 00:23:48.926 } 00:23:48.926 ] 00:23:48.926 }' 00:23:48.926 12:43:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.185 12:43:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.185 12:43:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.185 12:43:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.185 12:43:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:49.185 [2024-10-01 12:43:31.582351] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:49.185 [2024-10-01 12:43:31.708559] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.444 [2024-10-01 12:43:31.890516] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:49.444 [2024-10-01 12:43:31.892154] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.444 [2024-10-01 12:43:31.931747] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.444 12:43:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.704 12:43:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.704 "name": "raid_bdev1", 00:23:49.704 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:49.704 "strip_size_kb": 0, 00:23:49.704 "state": "online", 00:23:49.704 "raid_level": "raid1", 00:23:49.704 "superblock": true, 00:23:49.704 "num_base_bdevs": 2, 00:23:49.704 "num_base_bdevs_discovered": 1, 00:23:49.704 "num_base_bdevs_operational": 1, 00:23:49.704 "base_bdevs_list": [ 00:23:49.704 { 00:23:49.704 "name": null, 00:23:49.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.704 "is_configured": false, 00:23:49.704 "data_offset": 2048, 00:23:49.704 "data_size": 63488 00:23:49.704 }, 00:23:49.704 { 00:23:49.704 "name": "BaseBdev2", 00:23:49.704 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:49.704 "is_configured": true, 00:23:49.704 "data_offset": 2048, 00:23:49.704 "data_size": 63488 00:23:49.704 } 00:23:49.704 ] 00:23:49.704 }' 00:23:49.704 12:43:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.704 12:43:32 -- common/autotest_common.sh@10 -- # set +x 00:23:50.274 12:43:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:50.274 12:43:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:50.274 12:43:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:50.274 12:43:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:50.274 12:43:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:50.274 12:43:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.274 12:43:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.534 12:43:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:50.534 "name": "raid_bdev1", 00:23:50.534 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:50.534 "strip_size_kb": 0, 00:23:50.534 "state": "online", 00:23:50.534 "raid_level": "raid1", 00:23:50.534 "superblock": true, 00:23:50.534 "num_base_bdevs": 2, 00:23:50.534 "num_base_bdevs_discovered": 1, 00:23:50.534 "num_base_bdevs_operational": 1, 00:23:50.534 "base_bdevs_list": [ 00:23:50.534 { 00:23:50.534 "name": null, 00:23:50.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.534 "is_configured": false, 00:23:50.534 "data_offset": 2048, 00:23:50.534 "data_size": 63488 00:23:50.534 }, 00:23:50.534 { 00:23:50.534 
"name": "BaseBdev2", 00:23:50.534 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:50.534 "is_configured": true, 00:23:50.534 "data_offset": 2048, 00:23:50.534 "data_size": 63488 00:23:50.534 } 00:23:50.534 ] 00:23:50.534 }' 00:23:50.534 12:43:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:50.534 12:43:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:50.534 12:43:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:50.534 12:43:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:50.534 12:43:33 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:50.794 [2024-10-01 12:43:33.207834] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:50.794 [2024-10-01 12:43:33.207918] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.794 [2024-10-01 12:43:33.256587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:50.794 [2024-10-01 12:43:33.258472] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:50.794 12:43:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:51.054 [2024-10-01 12:43:33.376243] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:51.054 [2024-10-01 12:43:33.376526] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:51.054 [2024-10-01 12:43:33.584382] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:51.054 [2024-10-01 12:43:33.584558] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:51.622 [2024-10-01 12:43:33.915024] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:51.622 [2024-10-01 12:43:33.915385] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:51.622 [2024-10-01 12:43:34.140752] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:51.622 [2024-10-01 12:43:34.140953] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:51.882 12:43:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.882 12:43:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:51.882 12:43:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:51.882 12:43:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:51.882 12:43:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:51.882 12:43:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.882 12:43:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:52.142 "name": "raid_bdev1", 00:23:52.142 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:52.142 "strip_size_kb": 0, 00:23:52.142 "state": "online", 00:23:52.142 "raid_level": "raid1", 00:23:52.142 "superblock": true, 00:23:52.142 "num_base_bdevs": 2, 
00:23:52.142 "num_base_bdevs_discovered": 2, 00:23:52.142 "num_base_bdevs_operational": 2, 00:23:52.142 "process": { 00:23:52.142 "type": "rebuild", 00:23:52.142 "target": "spare", 00:23:52.142 "progress": { 00:23:52.142 "blocks": 12288, 00:23:52.142 "percent": 19 00:23:52.142 } 00:23:52.142 }, 00:23:52.142 "base_bdevs_list": [ 00:23:52.142 { 00:23:52.142 "name": "spare", 00:23:52.142 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:52.142 "is_configured": true, 00:23:52.142 "data_offset": 2048, 00:23:52.142 "data_size": 63488 00:23:52.142 }, 00:23:52.142 { 00:23:52.142 "name": "BaseBdev2", 00:23:52.142 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:52.142 "is_configured": true, 00:23:52.142 "data_offset": 2048, 00:23:52.142 "data_size": 63488 00:23:52.142 } 00:23:52.142 ] 00:23:52.142 }' 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:52.142 [2024-10-01 12:43:34.469869] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:52.142 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@657 -- # local timeout=401 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.142 12:43:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.402 [2024-10-01 12:43:34.688445] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:52.402 [2024-10-01 12:43:34.688694] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:52.402 12:43:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:52.402 "name": "raid_bdev1", 00:23:52.402 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:52.402 "strip_size_kb": 0, 00:23:52.402 "state": "online", 00:23:52.402 "raid_level": "raid1", 00:23:52.402 "superblock": true, 00:23:52.402 "num_base_bdevs": 2, 00:23:52.402 "num_base_bdevs_discovered": 2, 00:23:52.402 "num_base_bdevs_operational": 2, 00:23:52.402 "process": { 00:23:52.402 "type": "rebuild", 00:23:52.402 "target": "spare", 00:23:52.402 "progress": { 00:23:52.402 "blocks": 16384, 00:23:52.402 "percent": 25 00:23:52.402 } 00:23:52.402 }, 00:23:52.402 
"base_bdevs_list": [ 00:23:52.402 { 00:23:52.402 "name": "spare", 00:23:52.402 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:52.402 "is_configured": true, 00:23:52.402 "data_offset": 2048, 00:23:52.402 "data_size": 63488 00:23:52.402 }, 00:23:52.402 { 00:23:52.402 "name": "BaseBdev2", 00:23:52.402 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:52.402 "is_configured": true, 00:23:52.402 "data_offset": 2048, 00:23:52.402 "data_size": 63488 00:23:52.402 } 00:23:52.402 ] 00:23:52.402 }' 00:23:52.402 12:43:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:52.402 12:43:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.402 12:43:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:52.402 12:43:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.402 12:43:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:52.662 [2024-10-01 12:43:34.999451] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:52.922 [2024-10-01 12:43:35.213009] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:52.922 [2024-10-01 12:43:35.213191] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:52.922 [2024-10-01 12:43:35.445621] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:53.181 [2024-10-01 12:43:35.658291] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.440 12:43:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.700 [2024-10-01 12:43:35.978845] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:53.700 12:43:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.700 "name": "raid_bdev1", 00:23:53.700 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:53.700 "strip_size_kb": 0, 00:23:53.700 "state": "online", 00:23:53.700 "raid_level": "raid1", 00:23:53.700 "superblock": true, 00:23:53.700 "num_base_bdevs": 2, 00:23:53.700 "num_base_bdevs_discovered": 2, 00:23:53.700 "num_base_bdevs_operational": 2, 00:23:53.700 "process": { 00:23:53.700 "type": "rebuild", 00:23:53.700 "target": "spare", 00:23:53.700 "progress": { 00:23:53.700 "blocks": 32768, 00:23:53.700 "percent": 51 00:23:53.700 } 00:23:53.700 }, 00:23:53.700 "base_bdevs_list": [ 00:23:53.700 { 00:23:53.700 "name": "spare", 00:23:53.700 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:53.700 "is_configured": true, 00:23:53.700 "data_offset": 2048, 00:23:53.700 "data_size": 63488 00:23:53.700 }, 00:23:53.700 { 00:23:53.700 "name": "BaseBdev2", 
00:23:53.700 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:53.700 "is_configured": true, 00:23:53.700 "data_offset": 2048, 00:23:53.700 "data_size": 63488 00:23:53.700 } 00:23:53.700 ] 00:23:53.700 }' 00:23:53.700 12:43:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.700 12:43:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.700 12:43:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.700 12:43:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.700 12:43:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.959 [2024-10-01 12:43:36.323244] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:53.959 [2024-10-01 12:43:36.323514] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:54.218 [2024-10-01 12:43:36.529734] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:54.218 [2024-10-01 12:43:36.529882] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:54.477 [2024-10-01 12:43:36.933920] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:54.477 [2024-10-01 12:43:36.934094] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.736 12:43:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.995 12:43:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:54.995 "name": "raid_bdev1", 00:23:54.995 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:54.995 "strip_size_kb": 0, 00:23:54.995 "state": "online", 00:23:54.995 "raid_level": "raid1", 00:23:54.995 "superblock": true, 00:23:54.995 "num_base_bdevs": 2, 00:23:54.995 "num_base_bdevs_discovered": 2, 00:23:54.995 "num_base_bdevs_operational": 2, 00:23:54.995 "process": { 00:23:54.995 "type": "rebuild", 00:23:54.995 "target": "spare", 00:23:54.995 "progress": { 00:23:54.995 "blocks": 51200, 00:23:54.995 "percent": 80 00:23:54.995 } 00:23:54.995 }, 00:23:54.996 "base_bdevs_list": [ 00:23:54.996 { 00:23:54.996 "name": "spare", 00:23:54.996 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:54.996 "is_configured": true, 00:23:54.996 "data_offset": 2048, 00:23:54.996 "data_size": 63488 00:23:54.996 }, 00:23:54.996 { 00:23:54.996 "name": "BaseBdev2", 00:23:54.996 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:54.996 "is_configured": true, 00:23:54.996 "data_offset": 2048, 00:23:54.996 "data_size": 63488 00:23:54.996 } 00:23:54.996 ] 00:23:54.996 }' 00:23:54.996 12:43:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 
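Two details in the rebuild tracking above are easy to miss. First, the earlier "[: =: unary operator expected" message from bdev_raid.sh line 617 is an unquoted empty expansion: '[' = false ']' reaches the [ builtin with no left operand, the test exits non-zero, the branch is skipped, and the run continues (the trace resumes at line 642). Second, progress is observed purely through the raid JSON; a minimal polling sketch using the same jq filters as the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Pull the raid_bdev1 object and read the rebuild status fields from it.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
jq -r '.process.type   // "none"' <<<"$info"        # "rebuild" while running
jq -r '.process.target // "none"' <<<"$info"        # which member is being rebuilt
jq -r '.process.progress.percent // 0' <<<"$info"   # e.g. 19, 25, 51, 80 above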
00:23:54.996 12:43:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.996 12:43:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:54.996 12:43:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.996 12:43:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:55.564 [2024-10-01 12:43:37.900035] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:55.564 [2024-10-01 12:43:38.005176] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:55.564 [2024-10-01 12:43:38.007200] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.133 12:43:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.133 "name": "raid_bdev1", 00:23:56.133 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:56.133 "strip_size_kb": 0, 00:23:56.133 "state": "online", 00:23:56.133 "raid_level": "raid1", 00:23:56.133 "superblock": true, 00:23:56.133 "num_base_bdevs": 2, 00:23:56.133 "num_base_bdevs_discovered": 2, 00:23:56.133 "num_base_bdevs_operational": 2, 00:23:56.133 "base_bdevs_list": [ 00:23:56.133 { 00:23:56.133 "name": "spare", 00:23:56.133 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:56.133 "is_configured": true, 00:23:56.133 "data_offset": 2048, 00:23:56.133 "data_size": 63488 00:23:56.133 }, 00:23:56.134 { 00:23:56.134 "name": "BaseBdev2", 00:23:56.134 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:56.134 "is_configured": true, 00:23:56.134 "data_offset": 2048, 00:23:56.134 "data_size": 63488 00:23:56.134 } 00:23:56.134 ] 00:23:56.134 }' 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@660 -- # break 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.134 12:43:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.393 12:43:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.393 "name": "raid_bdev1", 00:23:56.393 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 
00:23:56.393 "strip_size_kb": 0, 00:23:56.393 "state": "online", 00:23:56.393 "raid_level": "raid1", 00:23:56.393 "superblock": true, 00:23:56.393 "num_base_bdevs": 2, 00:23:56.393 "num_base_bdevs_discovered": 2, 00:23:56.393 "num_base_bdevs_operational": 2, 00:23:56.393 "base_bdevs_list": [ 00:23:56.393 { 00:23:56.393 "name": "spare", 00:23:56.393 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:56.393 "is_configured": true, 00:23:56.393 "data_offset": 2048, 00:23:56.393 "data_size": 63488 00:23:56.393 }, 00:23:56.393 { 00:23:56.393 "name": "BaseBdev2", 00:23:56.393 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:56.393 "is_configured": true, 00:23:56.393 "data_offset": 2048, 00:23:56.393 "data_size": 63488 00:23:56.393 } 00:23:56.393 ] 00:23:56.393 }' 00:23:56.393 12:43:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.393 12:43:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:56.393 12:43:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.652 12:43:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.652 12:43:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:56.652 "name": "raid_bdev1", 00:23:56.652 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88", 00:23:56.652 "strip_size_kb": 0, 00:23:56.652 "state": "online", 00:23:56.652 "raid_level": "raid1", 00:23:56.652 "superblock": true, 00:23:56.652 "num_base_bdevs": 2, 00:23:56.652 "num_base_bdevs_discovered": 2, 00:23:56.652 "num_base_bdevs_operational": 2, 00:23:56.652 "base_bdevs_list": [ 00:23:56.652 { 00:23:56.652 "name": "spare", 00:23:56.652 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6", 00:23:56.652 "is_configured": true, 00:23:56.652 "data_offset": 2048, 00:23:56.652 "data_size": 63488 00:23:56.652 }, 00:23:56.652 { 00:23:56.652 "name": "BaseBdev2", 00:23:56.652 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001", 00:23:56.652 "is_configured": true, 00:23:56.652 "data_offset": 2048, 00:23:56.652 "data_size": 63488 00:23:56.652 } 00:23:56.652 ] 00:23:56.652 }' 00:23:56.652 12:43:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:56.652 12:43:39 -- common/autotest_common.sh@10 -- # set +x 00:23:57.219 12:43:39 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:57.479 [2024-10-01 12:43:39.836328] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:57.479 [2024-10-01 12:43:39.836367] bdev_raid.c:1734:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:23:57.479 00:23:57.479 Latency(us) 00:23:57.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.479 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:57.479 raid_bdev1 : 10.78 115.85 347.56 0.00 0.00 11311.47 284.58 108647.63 00:23:57.479 =================================================================================================================== 00:23:57.479 Total : 115.85 347.56 0.00 0.00 11311.47 284.58 108647.63 00:23:57.479 [2024-10-01 12:43:39.957795] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.479 [2024-10-01 12:43:39.957831] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:57.479 [2024-10-01 12:43:39.957897] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:57.479 [2024-10-01 12:43:39.957905] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:23:57.479 0 00:23:57.479 12:43:39 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.479 12:43:39 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:57.738 12:43:40 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:57.738 12:43:40 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:57.738 12:43:40 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@12 -- # local i 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.738 12:43:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:57.997 /dev/nbd0 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:57.998 12:43:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:57.998 12:43:40 -- common/autotest_common.sh@857 -- # local i 00:23:57.998 12:43:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:57.998 12:43:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:57.998 12:43:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:57.998 12:43:40 -- common/autotest_common.sh@861 -- # break 00:23:57.998 12:43:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:57.998 12:43:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:57.998 12:43:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:57.998 1+0 records in 00:23:57.998 1+0 records out 00:23:57.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000896319 s, 4.6 MB/s 00:23:57.998 12:43:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.998 12:43:40 -- common/autotest_common.sh@874 -- # size=4096 00:23:57.998 12:43:40 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.998 12:43:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:57.998 12:43:40 -- common/autotest_common.sh@877 -- # return 0 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.998 12:43:40 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:57.998 12:43:40 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:23:57.998 12:43:40 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@12 -- # local i 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.998 12:43:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:58.258 /dev/nbd1 00:23:58.258 12:43:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:58.258 12:43:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:58.258 12:43:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:58.258 12:43:40 -- common/autotest_common.sh@857 -- # local i 00:23:58.258 12:43:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:58.258 12:43:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:58.258 12:43:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:58.258 12:43:40 -- common/autotest_common.sh@861 -- # break 00:23:58.258 12:43:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:58.258 12:43:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:58.258 12:43:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:58.258 1+0 records in 00:23:58.258 1+0 records out 00:23:58.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921264 s, 4.4 MB/s 00:23:58.258 12:43:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:58.258 12:43:40 -- common/autotest_common.sh@874 -- # size=4096 00:23:58.258 12:43:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:58.258 12:43:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:58.258 12:43:40 -- common/autotest_common.sh@877 -- # return 0 00:23:58.258 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:58.258 12:43:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.258 12:43:40 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:58.518 12:43:40 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:58.518 12:43:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.518 12:43:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:58.518 12:43:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:58.518 12:43:40 -- bdev/nbd_common.sh@51 -- # local i 00:23:58.518 12:43:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:58.518 
12:43:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@41 -- # break
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@45 -- # return 0
00:23:58.778 12:43:41 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@51 -- # local i
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:58.778 12:43:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@41 -- # break
00:23:59.038 12:43:41 -- bdev/nbd_common.sh@45 -- # return 0
00:23:59.038 12:43:41 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:23:59.038 12:43:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:59.038 12:43:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:23:59.038 12:43:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:23:59.038 12:43:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:23:59.297 [2024-10-01 12:43:41.681412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:23:59.297 [2024-10-01 12:43:41.681494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:59.297 [2024-10-01 12:43:41.681544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:23:59.297 [2024-10-01 12:43:41.681570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:59.297 [2024-10-01 12:43:41.683876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:59.297 [2024-10-01 12:43:41.683974] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:23:59.297 [2024-10-01 12:43:41.684088] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:23:59.297 [2024-10-01 12:43:41.684142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:59.297 BaseBdev1
00:23:59.297 12:43:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:59.297 12:43:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:23:59.297 12:43:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:23:59.556 12:43:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:23:59.556 [2024-10-01 12:43:42.036922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:23:59.556 [2024-10-01 12:43:42.036981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:59.556 [2024-10-01 12:43:42.037010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:23:59.556 [2024-10-01 12:43:42.037035] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:59.556 [2024-10-01 12:43:42.037434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:59.556 [2024-10-01 12:43:42.037499] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:23:59.556 [2024-10-01 12:43:42.037616] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:23:59.556 [2024-10-01 12:43:42.037627] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:23:59.556 [2024-10-01 12:43:42.037634] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:59.556 [2024-10-01 12:43:42.037650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring
00:23:59.556 [2024-10-01 12:43:42.037725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:59.556 BaseBdev2
00:23:59.815 12:43:42 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:23:59.815 12:43:42 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:00.076 [2024-10-01 12:43:42.392463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:00.076 [2024-10-01 12:43:42.392520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:00.076 [2024-10-01 12:43:42.392553] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:24:00.076 [2024-10-01 12:43:42.392573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:00.076 [2024-10-01 12:43:42.393015] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:00.076 [2024-10-01 12:43:42.393067] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:00.076 [2024-10-01 12:43:42.393179] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:24:00.076 [2024-10-01 12:43:42.393205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:00.076 spare
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:00.076 [2024-10-01 12:43:42.493135] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80
00:24:00.076 [2024-10-01 12:43:42.493153] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:24:00.076 [2024-10-01 12:43:42.493250] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30
00:24:00.076 [2024-10-01 12:43:42.493591] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80
00:24:00.076 [2024-10-01 12:43:42.493609] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80
00:24:00.076 [2024-10-01 12:43:42.493741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:00.076 "name": "raid_bdev1",
00:24:00.076 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88",
00:24:00.076 "strip_size_kb": 0,
00:24:00.076 "state": "online",
00:24:00.076 "raid_level": "raid1",
00:24:00.076 "superblock": true,
00:24:00.076 "num_base_bdevs": 2,
00:24:00.076 "num_base_bdevs_discovered": 2,
00:24:00.076 "num_base_bdevs_operational": 2,
00:24:00.076 "base_bdevs_list": [
00:24:00.076 {
00:24:00.076 "name": "spare",
00:24:00.076 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6",
00:24:00.076 "is_configured": true,
00:24:00.076 "data_offset": 2048,
00:24:00.076 "data_size": 63488
00:24:00.076 },
00:24:00.076 {
00:24:00.076 "name": "BaseBdev2",
00:24:00.076 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001",
00:24:00.076 "is_configured": true,
00:24:00.076 "data_offset": 2048,
00:24:00.076 "data_size": 63488
00:24:00.076 }
00:24:00.076 ]
00:24:00.076 }'
00:24:00.076 12:43:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:00.076 12:43:42 -- common/autotest_common.sh@10 -- # set +x
00:24:00.646 12:43:43 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:00.646 12:43:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:00.646 12:43:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:00.646 12:43:43 -- bdev/bdev_raid.sh@185 -- # local target=none
00:24:00.646 12:43:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:00.646 12:43:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:00.646 12:43:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:00.906 12:43:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:00.906 "name": "raid_bdev1",
00:24:00.906 "uuid": "986a4eeb-4384-4da0-b216-0bc4e7f1ce88",
00:24:00.906 "strip_size_kb": 0,
00:24:00.906 "state": "online",
00:24:00.906 "raid_level": "raid1",
00:24:00.906 "superblock": true,
00:24:00.906 "num_base_bdevs": 2,
00:24:00.906 "num_base_bdevs_discovered": 2,
00:24:00.906 "num_base_bdevs_operational": 2,
00:24:00.906 "base_bdevs_list": [
00:24:00.906 {
00:24:00.906 "name": "spare",
00:24:00.906 "uuid": "e5ce65c9-0e96-58e5-8ead-9ef5e56dd7d6",
00:24:00.906 "is_configured": true,
00:24:00.906 "data_offset": 2048,
00:24:00.906 "data_size": 63488
00:24:00.906 },
00:24:00.906 {
00:24:00.906 "name": "BaseBdev2",
00:24:00.906 "uuid": "c6a6c24b-ca2d-5615-ac26-be05694b6001",
00:24:00.906 "is_configured": true,
00:24:00.906 "data_offset": 2048,
00:24:00.906 "data_size": 63488
00:24:00.906 }
00:24:00.906 ]
00:24:00.906 }'
00:24:00.906 12:43:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:00.906 12:43:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:00.906 12:43:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:01.166 12:43:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:01.166 12:43:43 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:01.166 12:43:43 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:24:01.166 12:43:43 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:24:01.166 12:43:43 -- bdev/bdev_raid.sh@709 -- # killprocess 124339
00:24:01.166 12:43:43 -- common/autotest_common.sh@926 -- # '[' -z 124339 ']'
00:24:01.166 12:43:43 -- common/autotest_common.sh@930 -- # kill -0 124339
00:24:01.167 12:43:43 -- common/autotest_common.sh@931 -- # uname
00:24:01.167 12:43:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:01.167 12:43:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124339
00:24:01.167 12:43:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:01.167 12:43:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:01.167 killing process with pid 124339
00:24:01.167 12:43:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124339'
00:24:01.167 Received shutdown signal, test time was about 14.506854 seconds
00:24:01.167
00:24:01.167 Latency(us)
00:24:01.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:01.167 ===================================================================================================================
00:24:01.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:01.167 12:43:43 -- common/autotest_common.sh@945 -- # kill 124339
00:24:01.167 [2024-10-01 12:43:43.656133] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:01.167 [2024-10-01 12:43:43.656219] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:01.167 12:43:43 -- common/autotest_common.sh@950 -- # wait 124339
00:24:01.167 [2024-10-01 12:43:43.656280] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:01.167 [2024-10-01 12:43:43.656289] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline
00:24:01.426 [2024-10-01 12:43:43.890785] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:02.829 12:43:45 -- bdev/bdev_raid.sh@711 -- # return 0
00:24:02.829 00:24:02.829 real 0m19.826s
00:24:02.829 user 0m29.551s
00:24:02.829 sys 0m2.864s
00:24:02.829 12:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:02.829 12:43:45 -- common/autotest_common.sh@10 -- # set +x
00:24:02.829 ************************************
00:24:02.829 END TEST raid_rebuild_test_sb_io
************************************
00:24:02.829 12:43:45 -- bdev/bdev_raid.sh@734 -- # for n in 2 4
00:24:02.829 12:43:45 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false
00:24:02.829 12:43:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:24:02.829 12:43:45 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:02.829 12:43:45 -- common/autotest_common.sh@10 -- # set +x
00:24:03.089 ************************************
00:24:03.089 START TEST raid_rebuild_test
00:24:03.089 ************************************
00:24:03.089 12:43:45 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@519 -- # local superblock=false
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@523 -- # local strip_size
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@524 -- # local create_arg
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@526 -- # local data_offset
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=124896
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124896 /var/tmp/spdk-raid.sock
00:24:03.089 12:43:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:24:03.089 12:43:45 -- common/autotest_common.sh@819 -- # '[' -z 124896 ']'
00:24:03.089 12:43:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:24:03.089 12:43:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:03.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
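The parameter block traced above (bdev_raid.sh@517-@545) amounts to the following setup. Names and values are copied from the trace; the control flow around them is a hedged reconstruction, not the script's actual source:

# raid_rebuild_test raid1 4 false false
raid_level=raid1        # positional arg 1
num_base_bdevs=4        # positional arg 2
superblock=false        # positional arg 3
background_io=false     # positional arg 4
# The echo loop at @521 generates BaseBdev1..BaseBdev4.
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
raid_bdev_name=raid_bdev1
strip_size=0            # raid1 uses no striping (@528/@536)
# bdevperf is started as the RPC target with the exact options in the trace;
# capturing the pid via $! is an assumption, the log only shows raid_pid=124896.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
waitforlisten $raid_pid /var/tmp/spdk-raid.sock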
00:24:03.089 12:43:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:24:03.089 12:43:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:03.089 12:43:45 -- common/autotest_common.sh@10 -- # set +x
00:24:03.089 [2024-10-01 12:43:45.445338] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:24:03.089 [2024-10-01 12:43:45.445493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124896 ]
00:24:03.089 I/O size of 3145728 is greater than zero copy threshold (65536).
00:24:03.089 Zero copy mechanism will not be used.
00:24:03.349 [2024-10-01 12:43:45.612081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:03.349 [2024-10-01 12:43:45.804431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:03.608 [2024-10-01 12:43:46.038148] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:03.867 12:43:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:03.867 12:43:46 -- common/autotest_common.sh@852 -- # return 0
00:24:03.867 12:43:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:03.867 12:43:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:03.867 12:43:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:24:04.127 BaseBdev1
00:24:04.127 12:43:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:04.127 12:43:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:04.127 12:43:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:24:04.387 BaseBdev2
00:24:04.387 12:43:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:04.387 12:43:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:04.387 12:43:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:24:04.647 BaseBdev3
00:24:04.647 12:43:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:04.647 12:43:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:04.647 12:43:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:24:04.647 BaseBdev4
00:24:04.647 12:43:47 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:24:04.907 spare_malloc
00:24:04.907 12:43:47 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:24:05.167 spare_delay
00:24:05.167 12:43:47 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:05.426 [2024-10-01 12:43:47.753465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:05.426 [2024-10-01 12:43:47.753556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:05.426 [2024-10-01 12:43:47.753600] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:24:05.426 [2024-10-01 12:43:47.753641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:05.426 [2024-10-01 12:43:47.755810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:05.426 [2024-10-01 12:43:47.755884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:05.426 spare
00:24:05.426 12:43:47 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:24:05.426 [2024-10-01 12:43:47.945249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:05.426 [2024-10-01 12:43:47.947109] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:05.426 [2024-10-01 12:43:47.947152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:05.426 [2024-10-01 12:43:47.947178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:05.426 [2024-10-01 12:43:47.947239] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:24:05.426 [2024-10-01 12:43:47.947247] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:24:05.426 [2024-10-01 12:43:47.947354] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:24:05.426 [2024-10-01 12:43:47.947672] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:24:05.426 [2024-10-01 12:43:47.947692] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:24:05.426 [2024-10-01 12:43:47.947836] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:05.686 12:43:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:05.686 12:43:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:05.686 "name": "raid_bdev1",
00:24:05.686 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:05.686 "strip_size_kb": 0,
00:24:05.686 "state": "online",
00:24:05.686 "raid_level": "raid1",
00:24:05.686 "superblock": false,
00:24:05.686 "num_base_bdevs": 4,
00:24:05.686 "num_base_bdevs_discovered": 4,
00:24:05.686 "num_base_bdevs_operational": 4,
00:24:05.686 "base_bdevs_list": [
00:24:05.686 {
00:24:05.686 "name": "BaseBdev1",
00:24:05.686 "uuid": "79da9b2b-e432-4a93-912e-4c1640693bbe",
00:24:05.686 "is_configured": true,
00:24:05.686 "data_offset": 0,
00:24:05.686 "data_size": 65536
00:24:05.686 },
00:24:05.686 {
00:24:05.686 "name": "BaseBdev2",
00:24:05.686 "uuid": "42d8e6db-08f0-4b31-a62b-5ec1fc15e805",
00:24:05.686 "is_configured": true,
00:24:05.686 "data_offset": 0,
00:24:05.686 "data_size": 65536
00:24:05.686 },
00:24:05.686 {
00:24:05.686 "name": "BaseBdev3",
00:24:05.686 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:05.686 "is_configured": true,
00:24:05.686 "data_offset": 0,
00:24:05.686 "data_size": 65536
00:24:05.686 },
00:24:05.686 {
00:24:05.686 "name": "BaseBdev4",
00:24:05.686 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:05.686 "is_configured": true,
00:24:05.686 "data_offset": 0,
00:24:05.686 "data_size": 65536
00:24:05.686 }
00:24:05.686 ]
00:24:05.686 }'
00:24:05.686 12:43:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:05.686 12:43:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.255 12:43:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:06.255 12:43:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:24:06.514 [2024-10-01 12:43:48.860165] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:06.514 12:43:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:24:06.514 12:43:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:06.514 12:43:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:24:06.774 12:43:49 -- bdev/bdev_raid.sh@570 -- # data_offset=0
00:24:06.774 12:43:49 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:24:06.774 12:43:49 -- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:24:06.774 12:43:49 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@12 -- # local i
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:24:06.774 [2024-10-01 12:43:49.243341] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:24:06.774 /dev/nbd0
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:06.774 12:43:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:06.774 12:43:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:24:06.774 12:43:49 -- common/autotest_common.sh@857 -- # local i
00:24:06.774 12:43:49 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:24:06.774 12:43:49 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:24:06.774 12:43:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:24:07.033 12:43:49 -- common/autotest_common.sh@861 -- # break
00:24:07.033 12:43:49 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:24:07.033 12:43:49 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
12:43:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:07.033 1+0 records in
00:24:07.033 1+0 records out
00:24:07.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325679 s, 12.6 MB/s
00:24:07.033 12:43:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:07.033 12:43:49 -- common/autotest_common.sh@874 -- # size=4096
00:24:07.033 12:43:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:07.033 12:43:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:24:07.033 12:43:49 -- common/autotest_common.sh@877 -- # return 0
00:24:07.033 12:43:49 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:07.033 12:43:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:07.033 12:43:49 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:24:07.033 12:43:49 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:24:07.033 12:43:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:24:12.308 65536+0 records in
00:24:12.308 65536+0 records out
00:24:12.308 33554432 bytes (34 MB, 32 MiB) copied, 4.50286 s, 7.5 MB/s
00:24:12.308 12:43:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:24:12.308 12:43:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:12.308 12:43:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:12.308 12:43:53 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:12.308 12:43:53 -- bdev/nbd_common.sh@51 -- # local i
00:24:12.308 12:43:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:12.308 12:43:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:12.308 [2024-10-01 12:43:54.037638] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@41 -- # break
00:24:12.308 12:43:54 -- bdev/nbd_common.sh@45 -- # return 0
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:24:12.308 [2024-10-01 12:43:54.221010] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:12.308 "name": "raid_bdev1",
00:24:12.308 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:12.308 "strip_size_kb": 0,
00:24:12.308 "state": "online",
00:24:12.308 "raid_level": "raid1",
00:24:12.308 "superblock": false,
00:24:12.308 "num_base_bdevs": 4,
00:24:12.308 "num_base_bdevs_discovered": 3,
00:24:12.308 "num_base_bdevs_operational": 3,
00:24:12.308 "base_bdevs_list": [
00:24:12.308 {
00:24:12.308 "name": null,
00:24:12.308 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:12.308 "is_configured": false,
00:24:12.308 "data_offset": 0,
00:24:12.308 "data_size": 65536
00:24:12.308 },
00:24:12.308 {
00:24:12.308 "name": "BaseBdev2",
00:24:12.308 "uuid": "42d8e6db-08f0-4b31-a62b-5ec1fc15e805",
00:24:12.308 "is_configured": true,
00:24:12.308 "data_offset": 0,
00:24:12.308 "data_size": 65536
00:24:12.308 },
00:24:12.308 {
00:24:12.308 "name": "BaseBdev3",
00:24:12.308 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:12.308 "is_configured": true,
00:24:12.308 "data_offset": 0,
00:24:12.308 "data_size": 65536
00:24:12.308 },
00:24:12.308 {
00:24:12.308 "name": "BaseBdev4",
00:24:12.308 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:12.308 "is_configured": true,
00:24:12.308 "data_offset": 0,
00:24:12.308 "data_size": 65536
00:24:12.308 }
00:24:12.308 ]
00:24:12.308 }'
00:24:12.308 12:43:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:12.308 12:43:54 -- common/autotest_common.sh@10 -- # set +x
00:24:12.568 12:43:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:12.827 [2024-10-01 12:43:55.123852] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:12.827 [2024-10-01 12:43:55.123917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:12.827 [2024-10-01 12:43:55.138758] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0
00:24:12.827 [2024-10-01 12:43:55.140867] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:12.827 12:43:55 -- bdev/bdev_raid.sh@598 -- # sleep 1
00:24:13.765 12:43:56 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:13.765 12:43:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:13.765 12:43:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:13.765 12:43:56 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:13.765 12:43:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:13.765 12:43:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:13.765 12:43:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:14.025 12:43:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:14.025 "name": "raid_bdev1",
00:24:14.025 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:14.025 "strip_size_kb": 0,
00:24:14.025 "state": "online",
00:24:14.025 "raid_level": "raid1",
00:24:14.025 "superblock": false,
00:24:14.025 "num_base_bdevs": 4,
00:24:14.025 "num_base_bdevs_discovered": 4,
00:24:14.025 "num_base_bdevs_operational": 4,
00:24:14.025 "process": {
00:24:14.025 "type": "rebuild",
00:24:14.025 "target": "spare",
00:24:14.025 "progress": {
00:24:14.025 "blocks": 22528,
00:24:14.025 "percent": 34
00:24:14.025 }
00:24:14.025 },
00:24:14.025 "base_bdevs_list": [
00:24:14.025 {
00:24:14.025 "name": "spare",
00:24:14.025 "uuid": "44c35dcb-c370-511d-8275-0f6387514f71",
00:24:14.025 "is_configured": true,
00:24:14.025 "data_offset": 0,
00:24:14.025 "data_size": 65536
00:24:14.025 },
00:24:14.025 {
00:24:14.025 "name": "BaseBdev2",
00:24:14.025 "uuid": "42d8e6db-08f0-4b31-a62b-5ec1fc15e805",
00:24:14.025 "is_configured": true,
00:24:14.025 "data_offset": 0,
00:24:14.025 "data_size": 65536
00:24:14.025 },
00:24:14.025 {
00:24:14.025 "name": "BaseBdev3",
00:24:14.025 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:14.025 "is_configured": true,
00:24:14.025 "data_offset": 0,
00:24:14.025 "data_size": 65536
00:24:14.025 },
00:24:14.025 {
00:24:14.025 "name": "BaseBdev4",
00:24:14.025 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:14.025 "is_configured": true,
00:24:14.025 "data_offset": 0,
00:24:14.025 "data_size": 65536
00:24:14.025 }
00:24:14.025 ]
00:24:14.025 }'
00:24:14.025 12:43:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:14.025 12:43:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:14.025 12:43:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:14.025 12:43:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:14.025 12:43:56 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:24:14.285 [2024-10-01 12:43:56.588978] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:14.285 [2024-10-01 12:43:56.648755] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:14.285 [2024-10-01 12:43:56.648881] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:14.285 12:43:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:14.545 12:43:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:14.545 "name": "raid_bdev1",
00:24:14.545 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:14.545 "strip_size_kb": 0,
00:24:14.545 "state": "online",
00:24:14.545 "raid_level": "raid1",
00:24:14.545 "superblock": false,
00:24:14.545 "num_base_bdevs": 4,
00:24:14.545 "num_base_bdevs_discovered": 3,
00:24:14.545 "num_base_bdevs_operational": 3,
00:24:14.545 "base_bdevs_list": [
00:24:14.545 {
00:24:14.545 "name": null,
00:24:14.545 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:14.545 "is_configured": false,
00:24:14.545 "data_offset": 0,
00:24:14.545 "data_size": 65536
00:24:14.545 },
00:24:14.545 {
00:24:14.545 "name": "BaseBdev2",
00:24:14.545 "uuid": "42d8e6db-08f0-4b31-a62b-5ec1fc15e805",
00:24:14.545 "is_configured": true,
00:24:14.545 "data_offset": 0,
00:24:14.545 "data_size": 65536
00:24:14.545 },
00:24:14.545 {
00:24:14.545 "name": "BaseBdev3",
00:24:14.545 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:14.545 "is_configured": true,
00:24:14.545 "data_offset": 0,
00:24:14.545 "data_size": 65536
00:24:14.545 },
00:24:14.545 {
00:24:14.545 "name": "BaseBdev4",
00:24:14.545 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:14.545 "is_configured": true,
00:24:14.545 "data_offset": 0,
00:24:14.545 "data_size": 65536
00:24:14.545 }
00:24:14.545 ]
00:24:14.545 }'
00:24:14.545 12:43:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:14.545 12:43:56 -- common/autotest_common.sh@10 -- # set +x
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@185 -- # local target=none
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:15.115 "name": "raid_bdev1",
00:24:15.115 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:15.115 "strip_size_kb": 0,
00:24:15.115 "state": "online",
00:24:15.115 "raid_level": "raid1",
00:24:15.115 "superblock": false,
00:24:15.115 "num_base_bdevs": 4,
00:24:15.115 "num_base_bdevs_discovered": 3,
00:24:15.115 "num_base_bdevs_operational": 3,
00:24:15.115 "base_bdevs_list": [
00:24:15.115 {
00:24:15.115 "name": null,
00:24:15.115 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:15.115 "is_configured": false,
00:24:15.115 "data_offset": 0,
00:24:15.115 "data_size": 65536
00:24:15.115 },
00:24:15.115 {
00:24:15.115 "name": "BaseBdev2",
00:24:15.115 "uuid": "42d8e6db-08f0-4b31-a62b-5ec1fc15e805",
00:24:15.115 "is_configured": true,
00:24:15.115 "data_offset": 0,
00:24:15.115 "data_size": 65536
00:24:15.115 },
00:24:15.115 {
00:24:15.115 "name": "BaseBdev3",
00:24:15.115 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:15.115 "is_configured": true,
00:24:15.115 "data_offset": 0,
00:24:15.115 "data_size": 65536
00:24:15.115 },
00:24:15.115 {
00:24:15.115 "name": "BaseBdev4",
00:24:15.115 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:15.115 "is_configured": true,
00:24:15.115 "data_offset": 0,
00:24:15.115 "data_size": 65536
00:24:15.115 }
00:24:15.115 ]
00:24:15.115 }'
00:24:15.115 12:43:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:15.374 12:43:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:15.374 12:43:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:15.374 12:43:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:15.374 12:43:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:15.374 [2024-10-01 12:43:57.891141] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:15.374 [2024-10-01 12:43:57.891191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:15.633 [2024-10-01 12:43:57.907374] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890
00:24:15.633 [2024-10-01 12:43:57.909322] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:15.633 12:43:57 -- bdev/bdev_raid.sh@614 -- # sleep 1
00:24:16.570 12:43:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:16.570 12:43:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:16.570 12:43:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:16.570 12:43:58 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:16.570 12:43:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:16.570 12:43:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:16.570 12:43:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:16.829 "name": "raid_bdev1",
00:24:16.829 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:16.829 "strip_size_kb": 0,
00:24:16.829 "state": "online",
00:24:16.829 "raid_level": "raid1",
00:24:16.829 "superblock": false,
00:24:16.829 "num_base_bdevs": 4,
00:24:16.829 "num_base_bdevs_discovered": 4,
00:24:16.829 "num_base_bdevs_operational": 4,
00:24:16.829 "process": {
00:24:16.829 "type": "rebuild",
00:24:16.829 "target": "spare",
00:24:16.829 "progress": {
00:24:16.829 "blocks": 24576,
00:24:16.829 "percent": 37
00:24:16.829 }
00:24:16.829 },
00:24:16.829 "base_bdevs_list": [
00:24:16.829 {
00:24:16.829 "name": "spare",
00:24:16.829 "uuid": "44c35dcb-c370-511d-8275-0f6387514f71",
00:24:16.829 "is_configured": true,
00:24:16.829 "data_offset": 0,
00:24:16.829 "data_size": 65536
00:24:16.829 },
00:24:16.829 {
00:24:16.829 "name": "BaseBdev2",
00:24:16.829 "uuid": "42d8e6db-08f0-4b31-a62b-5ec1fc15e805",
00:24:16.829 "is_configured": true,
00:24:16.829 "data_offset": 0,
00:24:16.829 "data_size": 65536
00:24:16.829 },
00:24:16.829 {
00:24:16.829 "name": "BaseBdev3",
00:24:16.829 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:16.829 "is_configured": true,
00:24:16.829 "data_offset": 0,
00:24:16.829 "data_size": 65536
00:24:16.829 },
00:24:16.829 {
00:24:16.829 "name": "BaseBdev4",
00:24:16.829 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:16.829 "is_configured": true,
00:24:16.829 "data_offset": 0,
00:24:16.829 "data_size": 65536
00:24:16.829 }
00:24:16.829 ]
00:24:16.829 }'
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:24:16.829 12:43:59 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:24:17.088 [2024-10-01 12:43:59.442661] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:24:17.088 [2024-10-01 12:43:59.516802] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890
00:24:17.088 12:43:59 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:24:17.088 12:43:59 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:24:17.088 12:43:59 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:17.088 12:43:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:17.088 12:43:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:17.088 12:43:59 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:17.088 12:43:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:17.089 12:43:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:17.089 12:43:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:17.347 12:43:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:17.347 "name": "raid_bdev1",
00:24:17.347 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:17.347 "strip_size_kb": 0,
00:24:17.347 "state": "online",
00:24:17.347 "raid_level": "raid1",
00:24:17.347 "superblock": false,
00:24:17.347 "num_base_bdevs": 4,
00:24:17.347 "num_base_bdevs_discovered": 3,
00:24:17.347 "num_base_bdevs_operational": 3,
00:24:17.347 "process": {
00:24:17.347 "type": "rebuild",
00:24:17.347 "target": "spare",
00:24:17.347 "progress": {
00:24:17.347 "blocks": 34816,
00:24:17.347 "percent": 53
00:24:17.347 }
00:24:17.347 },
00:24:17.347 "base_bdevs_list": [
00:24:17.347 {
00:24:17.347 "name": "spare",
00:24:17.347 "uuid": "44c35dcb-c370-511d-8275-0f6387514f71",
00:24:17.347 "is_configured": true,
00:24:17.347 "data_offset": 0,
00:24:17.347 "data_size": 65536
00:24:17.347 },
00:24:17.347 {
00:24:17.347 "name": null,
00:24:17.347 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:17.347 "is_configured": false,
00:24:17.347 "data_offset": 0,
00:24:17.347 "data_size": 65536
00:24:17.347 },
00:24:17.347 {
00:24:17.347 "name": "BaseBdev3",
00:24:17.347 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:17.348 "is_configured": true,
00:24:17.348 "data_offset": 0,
00:24:17.348 "data_size": 65536
00:24:17.348 },
00:24:17.348 {
00:24:17.348 "name": "BaseBdev4",
00:24:17.348 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:17.348 "is_configured": true,
00:24:17.348 "data_offset": 0,
00:24:17.348 "data_size": 65536
00:24:17.348 }
00:24:17.348 ]
00:24:17.348 }'
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@657 -- # local timeout=426
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
12:43:59 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:17.348 12:43:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:17.605 12:43:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:17.605 "name": "raid_bdev1",
00:24:17.605 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:17.605 "strip_size_kb": 0,
00:24:17.605 "state": "online",
00:24:17.605 "raid_level": "raid1",
00:24:17.605 "superblock": false,
00:24:17.605 "num_base_bdevs": 4,
00:24:17.605 "num_base_bdevs_discovered": 3,
00:24:17.605 "num_base_bdevs_operational": 3,
00:24:17.605 "process": {
00:24:17.605 "type": "rebuild",
00:24:17.605 "target": "spare",
00:24:17.605 "progress": {
00:24:17.605 "blocks": 40960,
00:24:17.605 "percent": 62
00:24:17.605 }
00:24:17.605 },
00:24:17.605 "base_bdevs_list": [
00:24:17.605 {
00:24:17.605 "name": "spare",
00:24:17.605 "uuid": "44c35dcb-c370-511d-8275-0f6387514f71",
00:24:17.605 "is_configured": true,
00:24:17.605 "data_offset": 0,
00:24:17.605 "data_size": 65536
00:24:17.605 },
00:24:17.605 {
00:24:17.605 "name": null,
00:24:17.605 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:17.605 "is_configured": false,
00:24:17.605 "data_offset": 0,
00:24:17.605 "data_size": 65536
00:24:17.605 },
00:24:17.605 {
00:24:17.605 "name": "BaseBdev3",
00:24:17.605 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:17.605 "is_configured": true,
00:24:17.605 "data_offset": 0,
00:24:17.605 "data_size": 65536
00:24:17.605 },
00:24:17.605 {
00:24:17.605 "name": "BaseBdev4",
00:24:17.605 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:17.605 "is_configured": true,
00:24:17.605 "data_offset": 0,
00:24:17.605 "data_size": 65536
00:24:17.605 }
00:24:17.605 ]
00:24:17.605 }'
00:24:17.605 12:43:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:17.605 12:44:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:17.605 12:44:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:17.605 12:44:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:17.605 12:44:00 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:18.980 12:44:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:18.981 [2024-10-01 12:44:01.122151] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:24:18.981 [2024-10-01 12:44:01.122215] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:24:18.981 [2024-10-01 12:44:01.122275] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:18.981 "name": "raid_bdev1",
00:24:18.981 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:18.981 "strip_size_kb": 0,
00:24:18.981 "state": "online",
00:24:18.981 "raid_level": "raid1",
00:24:18.981 "superblock": false,
00:24:18.981 "num_base_bdevs": 4,
00:24:18.981 "num_base_bdevs_discovered": 3,
00:24:18.981 "num_base_bdevs_operational": 3,
00:24:18.981 "base_bdevs_list": [
00:24:18.981 {
00:24:18.981 "name": "spare",
00:24:18.981 "uuid": "44c35dcb-c370-511d-8275-0f6387514f71",
00:24:18.981 "is_configured": true,
00:24:18.981 "data_offset": 0,
00:24:18.981 "data_size": 65536
00:24:18.981 },
00:24:18.981 {
00:24:18.981 "name": null,
00:24:18.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:18.981 "is_configured": false,
00:24:18.981 "data_offset": 0,
00:24:18.981 "data_size": 65536
00:24:18.981 },
00:24:18.981 {
00:24:18.981 "name": "BaseBdev3",
00:24:18.981 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:18.981 "is_configured": true,
00:24:18.981 "data_offset": 0,
00:24:18.981 "data_size": 65536
00:24:18.981 },
00:24:18.981 {
00:24:18.981 "name": "BaseBdev4",
00:24:18.981 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:18.981 "is_configured": true,
00:24:18.981 "data_offset": 0,
00:24:18.981 "data_size": 65536
00:24:18.981 }
00:24:18.981 ]
00:24:18.981 }'
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@660 -- # break
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@185 -- # local target=none
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:18.981 12:44:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:19.240 "name": "raid_bdev1",
00:24:19.240 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3",
00:24:19.240 "strip_size_kb": 0,
00:24:19.240 "state": "online",
00:24:19.240 "raid_level": "raid1",
00:24:19.240 "superblock": false,
00:24:19.240 "num_base_bdevs": 4,
00:24:19.240 "num_base_bdevs_discovered": 3,
00:24:19.240 "num_base_bdevs_operational": 3,
00:24:19.240 "base_bdevs_list": [
00:24:19.240 {
00:24:19.240 "name": "spare",
00:24:19.240 "uuid": "44c35dcb-c370-511d-8275-0f6387514f71",
00:24:19.240 "is_configured": true,
00:24:19.240 "data_offset": 0,
00:24:19.240 "data_size": 65536
00:24:19.240 },
00:24:19.240 {
00:24:19.240 "name": null,
00:24:19.240 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:19.240 "is_configured": false,
00:24:19.240 "data_offset": 0,
00:24:19.240 "data_size": 65536
00:24:19.240 },
00:24:19.240 {
00:24:19.240 "name": "BaseBdev3",
00:24:19.240 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777",
00:24:19.240 "is_configured": true,
00:24:19.240 "data_offset": 0,
00:24:19.240 "data_size": 65536
00:24:19.240 },
00:24:19.240 {
00:24:19.240 "name": "BaseBdev4",
00:24:19.240 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738",
00:24:19.240 "is_configured": true, 00:24:19.240 "data_offset": 0, 00:24:19.240 "data_size": 65536 00:24:19.240 } 00:24:19.240 ] 00:24:19.240 }' 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.240 12:44:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.241 12:44:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.241 12:44:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.241 12:44:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.499 12:44:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.499 "name": "raid_bdev1", 00:24:19.499 "uuid": "b4c981f8-383f-4bf6-9976-61ebda2512d3", 00:24:19.499 "strip_size_kb": 0, 00:24:19.499 "state": "online", 00:24:19.499 "raid_level": "raid1", 00:24:19.499 "superblock": false, 00:24:19.499 "num_base_bdevs": 4, 00:24:19.499 "num_base_bdevs_discovered": 3, 00:24:19.499 "num_base_bdevs_operational": 3, 00:24:19.499 "base_bdevs_list": [ 00:24:19.499 { 00:24:19.499 "name": "spare", 00:24:19.499 "uuid": "44c35dcb-c370-511d-8275-0f6387514f71", 00:24:19.499 "is_configured": true, 00:24:19.499 "data_offset": 0, 00:24:19.499 "data_size": 65536 00:24:19.499 }, 00:24:19.499 { 00:24:19.499 "name": null, 00:24:19.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.499 "is_configured": false, 00:24:19.499 "data_offset": 0, 00:24:19.499 "data_size": 65536 00:24:19.499 }, 00:24:19.499 { 00:24:19.499 "name": "BaseBdev3", 00:24:19.499 "uuid": "328d1fe3-b6ce-4212-9e1e-938e912eb777", 00:24:19.499 "is_configured": true, 00:24:19.499 "data_offset": 0, 00:24:19.499 "data_size": 65536 00:24:19.499 }, 00:24:19.499 { 00:24:19.499 "name": "BaseBdev4", 00:24:19.499 "uuid": "84443cb6-be54-4a3c-89cd-19f811cb4738", 00:24:19.499 "is_configured": true, 00:24:19.499 "data_offset": 0, 00:24:19.499 "data_size": 65536 00:24:19.499 } 00:24:19.499 ] 00:24:19.499 }' 00:24:19.499 12:44:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.499 12:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:20.067 12:44:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:20.067 [2024-10-01 12:44:02.528717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:20.067 [2024-10-01 12:44:02.528748] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:20.067 [2024-10-01 12:44:02.528837] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:20.067 [2024-10-01 
12:44:02.528906] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:20.067 [2024-10-01 12:44:02.528916] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:24:20.067 12:44:02 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.067 12:44:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:20.326 12:44:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:20.326 12:44:02 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:20.326 12:44:02 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@12 -- # local i 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:20.326 12:44:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:20.586 /dev/nbd0 00:24:20.586 12:44:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:20.586 12:44:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:20.586 12:44:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:20.586 12:44:02 -- common/autotest_common.sh@857 -- # local i 00:24:20.586 12:44:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:20.586 12:44:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:20.586 12:44:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:20.586 12:44:02 -- common/autotest_common.sh@861 -- # break 00:24:20.586 12:44:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:20.586 12:44:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:20.586 12:44:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:20.586 1+0 records in 00:24:20.586 1+0 records out 00:24:20.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582566 s, 7.0 MB/s 00:24:20.586 12:44:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.586 12:44:02 -- common/autotest_common.sh@874 -- # size=4096 00:24:20.586 12:44:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.586 12:44:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:20.586 12:44:02 -- common/autotest_common.sh@877 -- # return 0 00:24:20.586 12:44:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:20.586 12:44:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:20.586 12:44:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:20.845 /dev/nbd1 00:24:20.845 12:44:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:20.845 12:44:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:20.845 12:44:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:20.845 12:44:03 -- 
common/autotest_common.sh@857 -- # local i 00:24:20.845 12:44:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:20.845 12:44:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:20.845 12:44:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:20.845 12:44:03 -- common/autotest_common.sh@861 -- # break 00:24:20.845 12:44:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:20.845 12:44:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:20.845 12:44:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:20.845 1+0 records in 00:24:20.845 1+0 records out 00:24:20.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617208 s, 6.6 MB/s 00:24:20.845 12:44:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.845 12:44:03 -- common/autotest_common.sh@874 -- # size=4096 00:24:20.845 12:44:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.845 12:44:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:20.845 12:44:03 -- common/autotest_common.sh@877 -- # return 0 00:24:20.845 12:44:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:20.845 12:44:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:20.846 12:44:03 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:21.105 12:44:03 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@51 -- # local i 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@41 -- # break 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@45 -- # return 0 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:21.105 12:44:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@41 -- # break 00:24:21.364 12:44:03 -- bdev/nbd_common.sh@45 -- # return 0 00:24:21.364 12:44:03 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:21.364 12:44:03 -- bdev/bdev_raid.sh@709 -- # killprocess 124896 
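The `cmp -i 0 /dev/nbd0 /dev/nbd1` step traced above is the data-integrity half of the test: the original member BaseBdev1 and the rebuilt spare are both exported as kernel NBD block devices and compared byte for byte, so a zero exit status proves the rebuild reproduced the member's data exactly. A minimal standalone sketch of that pattern follows; it assumes the bdevperf RPC server from this run is still listening on /var/tmp/spdk-raid.sock and that nbd0/nbd1 are free (the real helpers additionally poll /proc/partitions until the devices appear). The trace of the `killprocess 124896` call continues below.

  # Sketch only -- reconstructed from the nbd_start_disks/cmp trace above,
  # not the verbatim test helpers.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc nbd_start_disk BaseBdev1 /dev/nbd0   # export the original member
  $rpc nbd_start_disk spare /dev/nbd1       # export the rebuilt spare
  cmp -i 0 /dev/nbd0 /dev/nbd1              # -i 0: skip nothing (data_offset is 0, no superblock)
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1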
00:24:21.365 12:44:03 -- common/autotest_common.sh@926 -- # '[' -z 124896 ']' 00:24:21.365 12:44:03 -- common/autotest_common.sh@930 -- # kill -0 124896 00:24:21.365 12:44:03 -- common/autotest_common.sh@931 -- # uname 00:24:21.365 12:44:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:21.365 12:44:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124896 00:24:21.365 killing process with pid 124896 00:24:21.365 Received shutdown signal, test time was about 60.000000 seconds 00:24:21.365 00:24:21.365 Latency(us) 00:24:21.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.365 =================================================================================================================== 00:24:21.365 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.365 12:44:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:21.365 12:44:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:21.365 12:44:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124896' 00:24:21.365 12:44:03 -- common/autotest_common.sh@945 -- # kill 124896 00:24:21.365 [2024-10-01 12:44:03.851785] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:21.365 12:44:03 -- common/autotest_common.sh@950 -- # wait 124896 00:24:21.954 [2024-10-01 12:44:04.343615] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:23.334 ************************************ 00:24:23.334 END TEST raid_rebuild_test 00:24:23.334 ************************************ 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:23.334 00:24:23.334 real 0m20.287s 00:24:23.334 user 0m26.757s 00:24:23.334 sys 0m3.767s 00:24:23.334 12:44:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:23.334 12:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:24:23.334 12:44:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:23.334 12:44:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:23.334 12:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:23.334 ************************************ 00:24:23.334 START TEST raid_rebuild_test_sb 00:24:23.334 ************************************ 00:24:23.334 12:44:05 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 
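The killprocess sequence traced above (before the END TEST banner) follows a standard kill-and-reap pattern: bail out if the pid is empty, probe liveness with `kill -0`, look up the process name with `ps` (here reactor_0, the SPDK reactor thread), then `kill` and `wait` so the exit status is reaped before teardown continues. A simplified reconstruction from this xtrace, not the verbatim autotest_common.sh helper (the sudo special case visible in the trace is elided); the base-bdev name loop for the next test resumes below.

  killprocess() {  # sketch per the trace above
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                     # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap; propagates exit status
  }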
00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@544 -- # raid_pid=125489 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125489 /var/tmp/spdk-raid.sock 00:24:23.334 12:44:05 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:23.334 12:44:05 -- common/autotest_common.sh@819 -- # '[' -z 125489 ']' 00:24:23.334 12:44:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:23.334 12:44:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:23.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:23.334 12:44:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:23.334 12:44:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:23.334 12:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:23.334 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:23.334 Zero copy mechanism will not be used. 00:24:23.334 [2024-10-01 12:44:05.826693] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
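For raid_rebuild_test_sb the harness launches the same bdevperf app but asks bdev_raid_create for an on-disk superblock (the `create_arg+=' -s'` above; with a superblock, the data_offset later reported for each member becomes 2048 instead of 0). A best-effort gloss of the bdevperf flags on the command line above, cross-checked against this log where possible; treat it as annotation rather than authoritative documentation, and note that -U is left unglossed:

  #   -r /var/tmp/spdk-raid.sock   RPC listen socket; every rpc.py call in this log targets it
  #   -T raid_bdev1                run the I/O job against this bdev
  #   -t 60                        run time in seconds (cf. "test time was about 60.000000 seconds")
  #   -w randrw -M 50              random mixed workload, 50% reads / 50% writes
  #   -o 3M -q 2                   3 MiB I/Os (hence the 3145728 zero-copy notice) at queue depth 2
  #   -z                           start idle and wait for RPCs, so the harness can build bdevs first
  #   -L bdev_raid                 enable the bdev_raid debug log flag (source of the *DEBUG* lines)

The DPDK EAL parameter dump that follows is the app-framework startup banner for pid 125489.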
00:24:23.334 [2024-10-01 12:44:05.826829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125489 ] 00:24:23.594 [2024-10-01 12:44:05.996079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.852 [2024-10-01 12:44:06.191339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.110 [2024-10-01 12:44:06.422860] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.110 12:44:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:24.110 12:44:06 -- common/autotest_common.sh@852 -- # return 0 00:24:24.110 12:44:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:24.110 12:44:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:24.110 12:44:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:24.369 BaseBdev1_malloc 00:24:24.369 12:44:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:24.629 [2024-10-01 12:44:07.017800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:24.629 [2024-10-01 12:44:07.017904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:24.629 [2024-10-01 12:44:07.017934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:24.629 [2024-10-01 12:44:07.017976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:24.629 [2024-10-01 12:44:07.020400] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:24.629 [2024-10-01 12:44:07.020456] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:24.629 BaseBdev1 00:24:24.629 12:44:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:24.629 12:44:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:24.629 12:44:07 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:24.888 BaseBdev2_malloc 00:24:24.888 12:44:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:25.147 [2024-10-01 12:44:07.446041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:25.147 [2024-10-01 12:44:07.446119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.147 [2024-10-01 12:44:07.446154] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:25.147 [2024-10-01 12:44:07.446199] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.147 [2024-10-01 12:44:07.448325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.147 [2024-10-01 12:44:07.448370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:25.147 BaseBdev2 00:24:25.147 12:44:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:25.147 12:44:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:25.147 12:44:07 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:25.147 BaseBdev3_malloc 00:24:25.406 12:44:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:25.406 [2024-10-01 12:44:07.840816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:25.406 [2024-10-01 12:44:07.840878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.406 [2024-10-01 12:44:07.840909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:25.406 [2024-10-01 12:44:07.840945] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.406 [2024-10-01 12:44:07.843042] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.406 [2024-10-01 12:44:07.843091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:25.406 BaseBdev3 00:24:25.406 12:44:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:25.406 12:44:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:25.406 12:44:07 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:25.665 BaseBdev4_malloc 00:24:25.665 12:44:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:25.924 [2024-10-01 12:44:08.242086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:25.924 [2024-10-01 12:44:08.242154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.924 [2024-10-01 12:44:08.242190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:25.924 [2024-10-01 12:44:08.242229] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.925 [2024-10-01 12:44:08.244555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.925 [2024-10-01 12:44:08.244613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:25.925 BaseBdev4 00:24:25.925 12:44:08 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:26.184 spare_malloc 00:24:26.184 12:44:08 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:26.184 spare_delay 00:24:26.184 12:44:08 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:26.443 [2024-10-01 12:44:08.812178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:26.443 [2024-10-01 12:44:08.812250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.443 [2024-10-01 12:44:08.812281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:26.443 [2024-10-01 12:44:08.812320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.443 [2024-10-01 12:44:08.814490] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:24:26.443 [2024-10-01 12:44:08.814548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:26.443 spare 00:24:26.443 12:44:08 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:26.706 [2024-10-01 12:44:09.008107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:26.706 [2024-10-01 12:44:09.010021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:26.706 [2024-10-01 12:44:09.010106] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.706 [2024-10-01 12:44:09.010147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:26.706 [2024-10-01 12:44:09.010317] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:24:26.706 [2024-10-01 12:44:09.010325] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:26.706 [2024-10-01 12:44:09.010487] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:26.706 [2024-10-01 12:44:09.010812] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:24:26.706 [2024-10-01 12:44:09.010832] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:24:26.706 [2024-10-01 12:44:09.010964] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.706 "name": "raid_bdev1", 00:24:26.706 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:26.706 "strip_size_kb": 0, 00:24:26.706 "state": "online", 00:24:26.706 "raid_level": "raid1", 00:24:26.706 "superblock": true, 00:24:26.706 "num_base_bdevs": 4, 00:24:26.706 "num_base_bdevs_discovered": 4, 00:24:26.706 "num_base_bdevs_operational": 4, 00:24:26.706 "base_bdevs_list": [ 00:24:26.706 { 00:24:26.706 "name": "BaseBdev1", 00:24:26.706 "uuid": "3921cbfb-1c0a-5bff-a169-3c5f3dc25dd1", 00:24:26.706 "is_configured": true, 00:24:26.706 "data_offset": 2048, 00:24:26.706 "data_size": 63488 00:24:26.706 }, 00:24:26.706 { 00:24:26.706 "name": "BaseBdev2", 00:24:26.706 "uuid": "29787307-2496-5eea-bcc0-54f12322d7ab", 00:24:26.706 "is_configured": true, 00:24:26.706 "data_offset": 2048, 
00:24:26.706 "data_size": 63488 00:24:26.706 }, 00:24:26.706 { 00:24:26.706 "name": "BaseBdev3", 00:24:26.706 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:26.706 "is_configured": true, 00:24:26.706 "data_offset": 2048, 00:24:26.706 "data_size": 63488 00:24:26.706 }, 00:24:26.706 { 00:24:26.706 "name": "BaseBdev4", 00:24:26.706 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:26.706 "is_configured": true, 00:24:26.706 "data_offset": 2048, 00:24:26.706 "data_size": 63488 00:24:26.706 } 00:24:26.706 ] 00:24:26.706 }' 00:24:26.706 12:44:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.706 12:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:27.275 12:44:09 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:27.275 12:44:09 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:27.535 [2024-10-01 12:44:09.870976] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:27.535 12:44:09 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:24:27.535 12:44:09 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.535 12:44:09 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:27.794 12:44:10 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:27.794 12:44:10 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:27.794 12:44:10 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:27.794 12:44:10 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@12 -- # local i 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:27.794 [2024-10-01 12:44:10.242346] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:27.794 /dev/nbd0 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:27.794 12:44:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:27.794 12:44:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:27.794 12:44:10 -- common/autotest_common.sh@857 -- # local i 00:24:27.794 12:44:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:27.794 12:44:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:27.794 12:44:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:27.794 12:44:10 -- common/autotest_common.sh@861 -- # break 00:24:27.794 12:44:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:27.794 12:44:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:27.794 12:44:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:27.794 1+0 records in 00:24:27.794 1+0 records out 00:24:27.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359564 s, 11.4 
MB/s 00:24:27.795 12:44:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.795 12:44:10 -- common/autotest_common.sh@874 -- # size=4096 00:24:27.795 12:44:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.795 12:44:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:27.795 12:44:10 -- common/autotest_common.sh@877 -- # return 0 00:24:27.795 12:44:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:27.795 12:44:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:27.795 12:44:10 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:24:27.795 12:44:10 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:24:27.795 12:44:10 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:33.070 63488+0 records in 00:24:33.070 63488+0 records out 00:24:33.070 32505856 bytes (33 MB, 31 MiB) copied, 5.19714 s, 6.3 MB/s 00:24:33.070 12:44:15 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:33.070 12:44:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.070 12:44:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:33.070 12:44:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:33.070 12:44:15 -- bdev/nbd_common.sh@51 -- # local i 00:24:33.070 12:44:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.070 12:44:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:33.329 [2024-10-01 12:44:15.709642] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@41 -- # break 00:24:33.329 12:44:15 -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.329 12:44:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:33.587 [2024-10-01 12:44:15.896957] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.587 12:44:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.588 12:44:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:33.588 12:44:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:33.588 "name": "raid_bdev1", 00:24:33.588 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:33.588 "strip_size_kb": 0, 00:24:33.588 "state": "online", 00:24:33.588 "raid_level": "raid1", 00:24:33.588 "superblock": true, 00:24:33.588 "num_base_bdevs": 4, 00:24:33.588 "num_base_bdevs_discovered": 3, 00:24:33.588 "num_base_bdevs_operational": 3, 00:24:33.588 "base_bdevs_list": [ 00:24:33.588 { 00:24:33.588 "name": null, 00:24:33.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.588 "is_configured": false, 00:24:33.588 "data_offset": 2048, 00:24:33.588 "data_size": 63488 00:24:33.588 }, 00:24:33.588 { 00:24:33.588 "name": "BaseBdev2", 00:24:33.588 "uuid": "29787307-2496-5eea-bcc0-54f12322d7ab", 00:24:33.588 "is_configured": true, 00:24:33.588 "data_offset": 2048, 00:24:33.588 "data_size": 63488 00:24:33.588 }, 00:24:33.588 { 00:24:33.588 "name": "BaseBdev3", 00:24:33.588 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:33.588 "is_configured": true, 00:24:33.588 "data_offset": 2048, 00:24:33.588 "data_size": 63488 00:24:33.588 }, 00:24:33.588 { 00:24:33.588 "name": "BaseBdev4", 00:24:33.588 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:33.588 "is_configured": true, 00:24:33.588 "data_offset": 2048, 00:24:33.588 "data_size": 63488 00:24:33.588 } 00:24:33.588 ] 00:24:33.588 }' 00:24:33.588 12:44:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:33.588 12:44:16 -- common/autotest_common.sh@10 -- # set +x 00:24:34.155 12:44:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:34.415 [2024-10-01 12:44:16.755682] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:34.415 [2024-10-01 12:44:16.755727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:34.415 [2024-10-01 12:44:16.772270] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:24:34.415 [2024-10-01 12:44:16.774342] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:34.415 12:44:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:35.356 12:44:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.356 12:44:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:35.356 12:44:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:35.356 12:44:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:35.356 12:44:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:35.356 12:44:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.356 12:44:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.616 12:44:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:35.616 "name": "raid_bdev1", 00:24:35.616 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:35.616 "strip_size_kb": 0, 00:24:35.616 "state": "online", 00:24:35.616 "raid_level": "raid1", 00:24:35.616 "superblock": true, 00:24:35.616 "num_base_bdevs": 4, 00:24:35.616 "num_base_bdevs_discovered": 4, 00:24:35.616 "num_base_bdevs_operational": 4, 00:24:35.616 "process": { 00:24:35.616 "type": "rebuild", 00:24:35.616 "target": "spare", 00:24:35.616 "progress": { 00:24:35.616 "blocks": 22528, 00:24:35.616 "percent": 35 00:24:35.616 } 00:24:35.616 
}, 00:24:35.616 "base_bdevs_list": [ 00:24:35.616 { 00:24:35.616 "name": "spare", 00:24:35.616 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:35.616 "is_configured": true, 00:24:35.616 "data_offset": 2048, 00:24:35.616 "data_size": 63488 00:24:35.616 }, 00:24:35.616 { 00:24:35.616 "name": "BaseBdev2", 00:24:35.616 "uuid": "29787307-2496-5eea-bcc0-54f12322d7ab", 00:24:35.616 "is_configured": true, 00:24:35.616 "data_offset": 2048, 00:24:35.616 "data_size": 63488 00:24:35.616 }, 00:24:35.616 { 00:24:35.616 "name": "BaseBdev3", 00:24:35.616 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:35.616 "is_configured": true, 00:24:35.616 "data_offset": 2048, 00:24:35.616 "data_size": 63488 00:24:35.616 }, 00:24:35.616 { 00:24:35.616 "name": "BaseBdev4", 00:24:35.616 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:35.616 "is_configured": true, 00:24:35.616 "data_offset": 2048, 00:24:35.616 "data_size": 63488 00:24:35.616 } 00:24:35.616 ] 00:24:35.616 }' 00:24:35.616 12:44:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:35.616 12:44:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.616 12:44:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:35.616 12:44:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.616 12:44:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:35.875 [2024-10-01 12:44:18.241897] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:35.875 [2024-10-01 12:44:18.282380] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:35.875 [2024-10-01 12:44:18.282561] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.875 12:44:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.134 12:44:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.134 "name": "raid_bdev1", 00:24:36.134 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:36.134 "strip_size_kb": 0, 00:24:36.134 "state": "online", 00:24:36.134 "raid_level": "raid1", 00:24:36.134 "superblock": true, 00:24:36.134 "num_base_bdevs": 4, 00:24:36.134 "num_base_bdevs_discovered": 3, 00:24:36.134 "num_base_bdevs_operational": 3, 00:24:36.134 "base_bdevs_list": [ 00:24:36.134 { 00:24:36.134 "name": null, 00:24:36.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.134 "is_configured": false, 00:24:36.134 "data_offset": 2048, 00:24:36.134 "data_size": 63488 00:24:36.134 }, 
00:24:36.134 { 00:24:36.134 "name": "BaseBdev2", 00:24:36.134 "uuid": "29787307-2496-5eea-bcc0-54f12322d7ab", 00:24:36.134 "is_configured": true, 00:24:36.134 "data_offset": 2048, 00:24:36.134 "data_size": 63488 00:24:36.134 }, 00:24:36.134 { 00:24:36.134 "name": "BaseBdev3", 00:24:36.134 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:36.134 "is_configured": true, 00:24:36.134 "data_offset": 2048, 00:24:36.134 "data_size": 63488 00:24:36.134 }, 00:24:36.134 { 00:24:36.134 "name": "BaseBdev4", 00:24:36.134 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:36.134 "is_configured": true, 00:24:36.134 "data_offset": 2048, 00:24:36.134 "data_size": 63488 00:24:36.134 } 00:24:36.134 ] 00:24:36.134 }' 00:24:36.134 12:44:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.134 12:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:36.701 12:44:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:36.701 12:44:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:36.701 12:44:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:36.701 12:44:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:36.701 12:44:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:36.701 12:44:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.701 12:44:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.960 12:44:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:36.960 "name": "raid_bdev1", 00:24:36.960 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:36.960 "strip_size_kb": 0, 00:24:36.960 "state": "online", 00:24:36.960 "raid_level": "raid1", 00:24:36.960 "superblock": true, 00:24:36.960 "num_base_bdevs": 4, 00:24:36.960 "num_base_bdevs_discovered": 3, 00:24:36.960 "num_base_bdevs_operational": 3, 00:24:36.960 "base_bdevs_list": [ 00:24:36.960 { 00:24:36.960 "name": null, 00:24:36.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.960 "is_configured": false, 00:24:36.960 "data_offset": 2048, 00:24:36.960 "data_size": 63488 00:24:36.960 }, 00:24:36.960 { 00:24:36.960 "name": "BaseBdev2", 00:24:36.960 "uuid": "29787307-2496-5eea-bcc0-54f12322d7ab", 00:24:36.960 "is_configured": true, 00:24:36.960 "data_offset": 2048, 00:24:36.960 "data_size": 63488 00:24:36.960 }, 00:24:36.960 { 00:24:36.960 "name": "BaseBdev3", 00:24:36.960 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:36.960 "is_configured": true, 00:24:36.960 "data_offset": 2048, 00:24:36.960 "data_size": 63488 00:24:36.960 }, 00:24:36.960 { 00:24:36.960 "name": "BaseBdev4", 00:24:36.960 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:36.960 "is_configured": true, 00:24:36.960 "data_offset": 2048, 00:24:36.960 "data_size": 63488 00:24:36.960 } 00:24:36.960 ] 00:24:36.960 }' 00:24:36.960 12:44:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:36.960 12:44:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:36.960 12:44:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:36.960 12:44:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:36.960 12:44:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:37.218 [2024-10-01 12:44:19.540789] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:37.218 [2024-10-01 12:44:19.540953] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:37.218 [2024-10-01 12:44:19.557131] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:24:37.218 [2024-10-01 12:44:19.559053] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:37.218 12:44:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:38.156 12:44:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:38.156 12:44:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:38.156 12:44:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:38.156 12:44:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:38.156 12:44:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:38.156 12:44:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.156 12:44:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:38.416 "name": "raid_bdev1", 00:24:38.416 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:38.416 "strip_size_kb": 0, 00:24:38.416 "state": "online", 00:24:38.416 "raid_level": "raid1", 00:24:38.416 "superblock": true, 00:24:38.416 "num_base_bdevs": 4, 00:24:38.416 "num_base_bdevs_discovered": 4, 00:24:38.416 "num_base_bdevs_operational": 4, 00:24:38.416 "process": { 00:24:38.416 "type": "rebuild", 00:24:38.416 "target": "spare", 00:24:38.416 "progress": { 00:24:38.416 "blocks": 22528, 00:24:38.416 "percent": 35 00:24:38.416 } 00:24:38.416 }, 00:24:38.416 "base_bdevs_list": [ 00:24:38.416 { 00:24:38.416 "name": "spare", 00:24:38.416 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:38.416 "is_configured": true, 00:24:38.416 "data_offset": 2048, 00:24:38.416 "data_size": 63488 00:24:38.416 }, 00:24:38.416 { 00:24:38.416 "name": "BaseBdev2", 00:24:38.416 "uuid": "29787307-2496-5eea-bcc0-54f12322d7ab", 00:24:38.416 "is_configured": true, 00:24:38.416 "data_offset": 2048, 00:24:38.416 "data_size": 63488 00:24:38.416 }, 00:24:38.416 { 00:24:38.416 "name": "BaseBdev3", 00:24:38.416 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:38.416 "is_configured": true, 00:24:38.416 "data_offset": 2048, 00:24:38.416 "data_size": 63488 00:24:38.416 }, 00:24:38.416 { 00:24:38.416 "name": "BaseBdev4", 00:24:38.416 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:38.416 "is_configured": true, 00:24:38.416 "data_offset": 2048, 00:24:38.416 "data_size": 63488 00:24:38.416 } 00:24:38.416 ] 00:24:38.416 }' 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:38.416 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:38.416 12:44:20 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:38.675 [2024-10-01 12:44:21.035082] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:38.675 [2024-10-01 12:44:21.066249] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.675 12:44:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.935 12:44:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:38.935 "name": "raid_bdev1", 00:24:38.935 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:38.935 "strip_size_kb": 0, 00:24:38.935 "state": "online", 00:24:38.935 "raid_level": "raid1", 00:24:38.935 "superblock": true, 00:24:38.935 "num_base_bdevs": 4, 00:24:38.935 "num_base_bdevs_discovered": 3, 00:24:38.935 "num_base_bdevs_operational": 3, 00:24:38.935 "process": { 00:24:38.935 "type": "rebuild", 00:24:38.935 "target": "spare", 00:24:38.935 "progress": { 00:24:38.935 "blocks": 34816, 00:24:38.935 "percent": 54 00:24:38.935 } 00:24:38.935 }, 00:24:38.935 "base_bdevs_list": [ 00:24:38.935 { 00:24:38.935 "name": "spare", 00:24:38.935 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:38.935 "is_configured": true, 00:24:38.935 "data_offset": 2048, 00:24:38.935 "data_size": 63488 00:24:38.935 }, 00:24:38.935 { 00:24:38.935 "name": null, 00:24:38.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.935 "is_configured": false, 00:24:38.935 "data_offset": 2048, 00:24:38.935 "data_size": 63488 00:24:38.935 }, 00:24:38.935 { 00:24:38.935 "name": "BaseBdev3", 00:24:38.935 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:38.935 "is_configured": true, 00:24:38.935 "data_offset": 2048, 00:24:38.935 "data_size": 63488 00:24:38.935 }, 00:24:38.935 { 00:24:38.935 "name": "BaseBdev4", 00:24:38.935 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:38.935 "is_configured": true, 00:24:38.935 "data_offset": 2048, 00:24:38.935 "data_size": 63488 00:24:38.935 } 00:24:38.935 ] 00:24:38.935 }' 00:24:38.935 12:44:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:38.935 12:44:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:38.935 12:44:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@657 -- # local timeout=448 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:39.196 "name": "raid_bdev1", 00:24:39.196 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:39.196 "strip_size_kb": 0, 00:24:39.196 "state": "online", 00:24:39.196 "raid_level": "raid1", 00:24:39.196 "superblock": true, 00:24:39.196 "num_base_bdevs": 4, 00:24:39.196 "num_base_bdevs_discovered": 3, 00:24:39.196 "num_base_bdevs_operational": 3, 00:24:39.196 "process": { 00:24:39.196 "type": "rebuild", 00:24:39.196 "target": "spare", 00:24:39.196 "progress": { 00:24:39.196 "blocks": 40960, 00:24:39.196 "percent": 64 00:24:39.196 } 00:24:39.196 }, 00:24:39.196 "base_bdevs_list": [ 00:24:39.196 { 00:24:39.196 "name": "spare", 00:24:39.196 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:39.196 "is_configured": true, 00:24:39.196 "data_offset": 2048, 00:24:39.196 "data_size": 63488 00:24:39.196 }, 00:24:39.196 { 00:24:39.196 "name": null, 00:24:39.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.196 "is_configured": false, 00:24:39.196 "data_offset": 2048, 00:24:39.196 "data_size": 63488 00:24:39.196 }, 00:24:39.196 { 00:24:39.196 "name": "BaseBdev3", 00:24:39.196 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:39.196 "is_configured": true, 00:24:39.196 "data_offset": 2048, 00:24:39.196 "data_size": 63488 00:24:39.196 }, 00:24:39.196 { 00:24:39.196 "name": "BaseBdev4", 00:24:39.196 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:39.196 "is_configured": true, 00:24:39.196 "data_offset": 2048, 00:24:39.196 "data_size": 63488 00:24:39.196 } 00:24:39.196 ] 00:24:39.196 }' 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.196 12:44:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:39.455 12:44:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.455 12:44:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:40.457 [2024-10-01 12:44:22.673994] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:40.457 [2024-10-01 12:44:22.674192] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:40.457 [2024-10-01 12:44:22.674418] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.457 "name": "raid_bdev1", 00:24:40.457 "uuid": 
"7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:40.457 "strip_size_kb": 0, 00:24:40.457 "state": "online", 00:24:40.457 "raid_level": "raid1", 00:24:40.457 "superblock": true, 00:24:40.457 "num_base_bdevs": 4, 00:24:40.457 "num_base_bdevs_discovered": 3, 00:24:40.457 "num_base_bdevs_operational": 3, 00:24:40.457 "base_bdevs_list": [ 00:24:40.457 { 00:24:40.457 "name": "spare", 00:24:40.457 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:40.457 "is_configured": true, 00:24:40.457 "data_offset": 2048, 00:24:40.457 "data_size": 63488 00:24:40.457 }, 00:24:40.457 { 00:24:40.457 "name": null, 00:24:40.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.457 "is_configured": false, 00:24:40.457 "data_offset": 2048, 00:24:40.457 "data_size": 63488 00:24:40.457 }, 00:24:40.457 { 00:24:40.457 "name": "BaseBdev3", 00:24:40.457 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:40.457 "is_configured": true, 00:24:40.457 "data_offset": 2048, 00:24:40.457 "data_size": 63488 00:24:40.457 }, 00:24:40.457 { 00:24:40.457 "name": "BaseBdev4", 00:24:40.457 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:40.457 "is_configured": true, 00:24:40.457 "data_offset": 2048, 00:24:40.457 "data_size": 63488 00:24:40.457 } 00:24:40.457 ] 00:24:40.457 }' 00:24:40.457 12:44:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.716 12:44:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:40.716 12:44:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@660 -- # break 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.716 "name": "raid_bdev1", 00:24:40.716 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:40.716 "strip_size_kb": 0, 00:24:40.716 "state": "online", 00:24:40.716 "raid_level": "raid1", 00:24:40.716 "superblock": true, 00:24:40.716 "num_base_bdevs": 4, 00:24:40.716 "num_base_bdevs_discovered": 3, 00:24:40.716 "num_base_bdevs_operational": 3, 00:24:40.716 "base_bdevs_list": [ 00:24:40.716 { 00:24:40.716 "name": "spare", 00:24:40.716 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:40.716 "is_configured": true, 00:24:40.716 "data_offset": 2048, 00:24:40.716 "data_size": 63488 00:24:40.716 }, 00:24:40.716 { 00:24:40.716 "name": null, 00:24:40.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.716 "is_configured": false, 00:24:40.716 "data_offset": 2048, 00:24:40.716 "data_size": 63488 00:24:40.716 }, 00:24:40.716 { 00:24:40.716 "name": "BaseBdev3", 00:24:40.716 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:40.716 "is_configured": true, 00:24:40.716 "data_offset": 2048, 00:24:40.716 "data_size": 63488 00:24:40.716 }, 00:24:40.716 { 00:24:40.716 "name": "BaseBdev4", 00:24:40.716 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:40.716 
"is_configured": true, 00:24:40.716 "data_offset": 2048, 00:24:40.716 "data_size": 63488 00:24:40.716 } 00:24:40.716 ] 00:24:40.716 }' 00:24:40.716 12:44:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.975 12:44:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:40.975 "name": "raid_bdev1", 00:24:40.975 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:40.975 "strip_size_kb": 0, 00:24:40.975 "state": "online", 00:24:40.975 "raid_level": "raid1", 00:24:40.976 "superblock": true, 00:24:40.976 "num_base_bdevs": 4, 00:24:40.976 "num_base_bdevs_discovered": 3, 00:24:40.976 "num_base_bdevs_operational": 3, 00:24:40.976 "base_bdevs_list": [ 00:24:40.976 { 00:24:40.976 "name": "spare", 00:24:40.976 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:40.976 "is_configured": true, 00:24:40.976 "data_offset": 2048, 00:24:40.976 "data_size": 63488 00:24:40.976 }, 00:24:40.976 { 00:24:40.976 "name": null, 00:24:40.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.976 "is_configured": false, 00:24:40.976 "data_offset": 2048, 00:24:40.976 "data_size": 63488 00:24:40.976 }, 00:24:40.976 { 00:24:40.976 "name": "BaseBdev3", 00:24:40.976 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:40.976 "is_configured": true, 00:24:40.976 "data_offset": 2048, 00:24:40.976 "data_size": 63488 00:24:40.976 }, 00:24:40.976 { 00:24:40.976 "name": "BaseBdev4", 00:24:40.976 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:40.976 "is_configured": true, 00:24:40.976 "data_offset": 2048, 00:24:40.976 "data_size": 63488 00:24:40.976 } 00:24:40.976 ] 00:24:40.976 }' 00:24:40.976 12:44:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:40.976 12:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:41.543 12:44:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:41.801 [2024-10-01 12:44:24.182920] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.801 [2024-10-01 12:44:24.183048] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.801 [2024-10-01 12:44:24.183259] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.801 [2024-10-01 
12:44:24.183374] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.801 [2024-10-01 12:44:24.183582] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:24:41.801 12:44:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.801 12:44:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:42.059 12:44:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:42.059 12:44:24 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:42.059 12:44:24 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@12 -- # local i 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.059 12:44:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:42.059 /dev/nbd0 00:24:42.318 12:44:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:42.318 12:44:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:42.318 12:44:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:42.318 12:44:24 -- common/autotest_common.sh@857 -- # local i 00:24:42.318 12:44:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:42.318 12:44:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:42.318 12:44:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:42.318 12:44:24 -- common/autotest_common.sh@861 -- # break 00:24:42.318 12:44:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:42.318 12:44:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:42.318 12:44:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.318 1+0 records in 00:24:42.318 1+0 records out 00:24:42.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372682 s, 11.0 MB/s 00:24:42.318 12:44:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.318 12:44:24 -- common/autotest_common.sh@874 -- # size=4096 00:24:42.318 12:44:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.318 12:44:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:42.318 12:44:24 -- common/autotest_common.sh@877 -- # return 0 00:24:42.318 12:44:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.318 12:44:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.318 12:44:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:42.318 /dev/nbd1 00:24:42.318 12:44:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:42.318 12:44:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:42.318 12:44:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:42.318 12:44:24 -- 
common/autotest_common.sh@857 -- # local i 00:24:42.577 12:44:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:42.577 12:44:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:42.577 12:44:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:42.577 12:44:24 -- common/autotest_common.sh@861 -- # break 00:24:42.577 12:44:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:42.577 12:44:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:42.577 12:44:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.577 1+0 records in 00:24:42.577 1+0 records out 00:24:42.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618986 s, 6.6 MB/s 00:24:42.577 12:44:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.577 12:44:24 -- common/autotest_common.sh@874 -- # size=4096 00:24:42.577 12:44:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.577 12:44:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:42.577 12:44:24 -- common/autotest_common.sh@877 -- # return 0 00:24:42.577 12:44:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.577 12:44:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.577 12:44:24 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:42.577 12:44:25 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:42.577 12:44:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:42.577 12:44:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:42.577 12:44:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:42.577 12:44:25 -- bdev/nbd_common.sh@51 -- # local i 00:24:42.577 12:44:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.577 12:44:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@41 -- # break 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.836 12:44:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@41 -- # break 00:24:43.094 12:44:25 -- bdev/nbd_common.sh@45 -- # return 0 00:24:43.094 12:44:25 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:43.094 12:44:25 -- bdev/bdev_raid.sh@694 -- # for bdev in 
"${base_bdevs[@]}" 00:24:43.094 12:44:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:43.094 12:44:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:43.353 12:44:25 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:43.353 [2024-10-01 12:44:25.844818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:43.353 [2024-10-01 12:44:25.845017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.353 [2024-10-01 12:44:25.845088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:43.353 [2024-10-01 12:44:25.845172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.353 [2024-10-01 12:44:25.847700] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.353 [2024-10-01 12:44:25.847856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:43.353 [2024-10-01 12:44:25.848091] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:43.353 [2024-10-01 12:44:25.848219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.353 BaseBdev1 00:24:43.353 12:44:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:43.353 12:44:25 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:24:43.353 12:44:25 -- bdev/bdev_raid.sh@696 -- # continue 00:24:43.353 12:44:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:43.353 12:44:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:43.353 12:44:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:43.611 12:44:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:43.869 [2024-10-01 12:44:26.192294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:43.869 [2024-10-01 12:44:26.192446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.869 [2024-10-01 12:44:26.192507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:43.869 [2024-10-01 12:44:26.192604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.869 [2024-10-01 12:44:26.192991] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.869 [2024-10-01 12:44:26.193124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:43.869 [2024-10-01 12:44:26.193237] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:43.869 [2024-10-01 12:44:26.193324] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:24:43.869 [2024-10-01 12:44:26.193356] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:43.869 [2024-10-01 12:44:26.193430] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:24:43.869 [2024-10-01 12:44:26.193528] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:43.869 BaseBdev3 00:24:43.869 12:44:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:43.869 12:44:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:43.869 12:44:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:24:43.869 12:44:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:44.127 [2024-10-01 12:44:26.551779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:44.127 [2024-10-01 12:44:26.551983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.127 [2024-10-01 12:44:26.552046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:44.127 [2024-10-01 12:44:26.552176] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.127 [2024-10-01 12:44:26.552602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.127 [2024-10-01 12:44:26.552750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:44.127 [2024-10-01 12:44:26.552904] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:24:44.127 [2024-10-01 12:44:26.552986] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:44.127 BaseBdev4 00:24:44.127 12:44:26 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:44.385 12:44:26 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:44.385 [2024-10-01 12:44:26.915239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:44.385 [2024-10-01 12:44:26.915293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.385 [2024-10-01 12:44:26.915318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:44.385 [2024-10-01 12:44:26.915345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.385 [2024-10-01 12:44:26.915729] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.385 [2024-10-01 12:44:26.915770] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:44.385 [2024-10-01 12:44:26.915863] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:44.385 [2024-10-01 12:44:26.915897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:44.644 spare 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:44.644 12:44:26 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.644 12:44:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.644 [2024-10-01 12:44:27.015847] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:24:44.644 [2024-10-01 12:44:27.015866] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:44.644 [2024-10-01 12:44:27.015986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:24:44.644 [2024-10-01 12:44:27.016332] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:24:44.644 [2024-10-01 12:44:27.016341] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:24:44.644 [2024-10-01 12:44:27.016456] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.644 12:44:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.644 "name": "raid_bdev1", 00:24:44.644 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 00:24:44.644 "strip_size_kb": 0, 00:24:44.644 "state": "online", 00:24:44.644 "raid_level": "raid1", 00:24:44.644 "superblock": true, 00:24:44.644 "num_base_bdevs": 4, 00:24:44.644 "num_base_bdevs_discovered": 3, 00:24:44.644 "num_base_bdevs_operational": 3, 00:24:44.644 "base_bdevs_list": [ 00:24:44.644 { 00:24:44.644 "name": "spare", 00:24:44.644 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:44.644 "is_configured": true, 00:24:44.644 "data_offset": 2048, 00:24:44.644 "data_size": 63488 00:24:44.644 }, 00:24:44.644 { 00:24:44.644 "name": null, 00:24:44.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.644 "is_configured": false, 00:24:44.644 "data_offset": 2048, 00:24:44.644 "data_size": 63488 00:24:44.644 }, 00:24:44.644 { 00:24:44.644 "name": "BaseBdev3", 00:24:44.644 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:44.644 "is_configured": true, 00:24:44.644 "data_offset": 2048, 00:24:44.644 "data_size": 63488 00:24:44.644 }, 00:24:44.644 { 00:24:44.644 "name": "BaseBdev4", 00:24:44.644 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:44.645 "is_configured": true, 00:24:44.645 "data_offset": 2048, 00:24:44.645 "data_size": 63488 00:24:44.645 } 00:24:44.645 ] 00:24:44.645 }' 00:24:44.645 12:44:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.645 12:44:27 -- common/autotest_common.sh@10 -- # set +x 00:24:45.213 12:44:27 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.213 12:44:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.213 12:44:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:45.213 12:44:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:45.213 12:44:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:45.213 12:44:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.213 12:44:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.472 12:44:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:45.472 "name": "raid_bdev1", 00:24:45.472 "uuid": "7b9f71d0-5e30-46fe-b5c4-18314287b454", 
00:24:45.472 "strip_size_kb": 0, 00:24:45.472 "state": "online", 00:24:45.472 "raid_level": "raid1", 00:24:45.472 "superblock": true, 00:24:45.472 "num_base_bdevs": 4, 00:24:45.472 "num_base_bdevs_discovered": 3, 00:24:45.472 "num_base_bdevs_operational": 3, 00:24:45.472 "base_bdevs_list": [ 00:24:45.472 { 00:24:45.472 "name": "spare", 00:24:45.472 "uuid": "df87c487-5326-52bc-ba6c-939ddaf6cc78", 00:24:45.472 "is_configured": true, 00:24:45.472 "data_offset": 2048, 00:24:45.472 "data_size": 63488 00:24:45.472 }, 00:24:45.472 { 00:24:45.472 "name": null, 00:24:45.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.472 "is_configured": false, 00:24:45.472 "data_offset": 2048, 00:24:45.472 "data_size": 63488 00:24:45.472 }, 00:24:45.472 { 00:24:45.472 "name": "BaseBdev3", 00:24:45.472 "uuid": "50f84502-13d0-5f9e-ab95-8281c07744de", 00:24:45.472 "is_configured": true, 00:24:45.472 "data_offset": 2048, 00:24:45.472 "data_size": 63488 00:24:45.472 }, 00:24:45.472 { 00:24:45.472 "name": "BaseBdev4", 00:24:45.472 "uuid": "f1760683-59e3-5a07-b1e4-32c441a45294", 00:24:45.472 "is_configured": true, 00:24:45.472 "data_offset": 2048, 00:24:45.472 "data_size": 63488 00:24:45.472 } 00:24:45.472 ] 00:24:45.472 }' 00:24:45.472 12:44:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:45.472 12:44:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:45.472 12:44:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:45.472 12:44:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:45.472 12:44:27 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:45.472 12:44:27 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.731 12:44:28 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.731 12:44:28 -- bdev/bdev_raid.sh@709 -- # killprocess 125489 00:24:45.731 12:44:28 -- common/autotest_common.sh@926 -- # '[' -z 125489 ']' 00:24:45.731 12:44:28 -- common/autotest_common.sh@930 -- # kill -0 125489 00:24:45.731 12:44:28 -- common/autotest_common.sh@931 -- # uname 00:24:45.731 12:44:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:45.731 12:44:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125489 00:24:45.731 12:44:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:45.731 12:44:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:45.731 12:44:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125489' 00:24:45.731 killing process with pid 125489 00:24:45.731 Received shutdown signal, test time was about 60.000000 seconds 00:24:45.731 00:24:45.731 Latency(us) 00:24:45.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.731 =================================================================================================================== 00:24:45.731 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:45.731 12:44:28 -- common/autotest_common.sh@945 -- # kill 125489 00:24:45.731 [2024-10-01 12:44:28.100452] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:45.731 12:44:28 -- common/autotest_common.sh@950 -- # wait 125489 00:24:45.731 [2024-10-01 12:44:28.100543] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.731 [2024-10-01 12:44:28.100621] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:24:45.731 [2024-10-01 12:44:28.100631] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:24:46.299 [2024-10-01 12:44:28.623477] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:47.674 ************************************ 00:24:47.674 END TEST raid_rebuild_test_sb 00:24:47.674 ************************************ 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:47.674 00:24:47.674 real 0m24.356s 00:24:47.674 user 0m33.340s 00:24:47.674 sys 0m4.532s 00:24:47.674 12:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.674 12:44:30 -- common/autotest_common.sh@10 -- # set +x 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:24:47.674 12:44:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:47.674 12:44:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:47.674 12:44:30 -- common/autotest_common.sh@10 -- # set +x 00:24:47.674 ************************************ 00:24:47.674 START TEST raid_rebuild_test_io 00:24:47.674 ************************************ 00:24:47.674 12:44:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:47.674 12:44:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=126115 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 
3M -q 2 -U -z -L bdev_raid 00:24:47.933 12:44:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126115 /var/tmp/spdk-raid.sock 00:24:47.933 12:44:30 -- common/autotest_common.sh@819 -- # '[' -z 126115 ']' 00:24:47.933 12:44:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:47.933 12:44:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:47.933 12:44:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:47.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:47.933 12:44:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:47.933 12:44:30 -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 [2024-10-01 12:44:30.284891] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:47.933 [2024-10-01 12:44:30.285028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126115 ] 00:24:47.933 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:47.933 Zero copy mechanism will not be used. 00:24:47.933 [2024-10-01 12:44:30.454724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.192 [2024-10-01 12:44:30.673447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.450 [2024-10-01 12:44:30.932248] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.709 12:44:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:48.709 12:44:31 -- common/autotest_common.sh@852 -- # return 0 00:24:48.709 12:44:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:48.709 12:44:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:48.709 12:44:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:48.969 BaseBdev1 00:24:48.969 12:44:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:48.969 12:44:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:48.969 12:44:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:49.229 BaseBdev2 00:24:49.229 12:44:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:49.229 12:44:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:49.229 12:44:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:49.488 BaseBdev3 00:24:49.488 12:44:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:49.488 12:44:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:49.488 12:44:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:49.748 BaseBdev4 00:24:49.748 12:44:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:49.748 spare_malloc 00:24:50.008 12:44:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 
00:24:50.008 spare_delay 00:24:50.008 12:44:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:50.267 [2024-10-01 12:44:32.666345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:50.267 [2024-10-01 12:44:32.666442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.267 [2024-10-01 12:44:32.666470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:24:50.267 [2024-10-01 12:44:32.666516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.267 [2024-10-01 12:44:32.668925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.267 [2024-10-01 12:44:32.668972] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:50.267 spare 00:24:50.267 12:44:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:50.551 [2024-10-01 12:44:32.842168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:50.551 [2024-10-01 12:44:32.844186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.551 [2024-10-01 12:44:32.844228] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:50.551 [2024-10-01 12:44:32.844256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:50.551 [2024-10-01 12:44:32.844316] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:24:50.551 [2024-10-01 12:44:32.844324] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:50.551 [2024-10-01 12:44:32.844432] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:50.551 [2024-10-01 12:44:32.844731] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:24:50.551 [2024-10-01 12:44:32.844740] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:24:50.551 [2024-10-01 12:44:32.844873] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.551 12:44:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.551 12:44:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.551 "name": 
"raid_bdev1", 00:24:50.551 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:50.551 "strip_size_kb": 0, 00:24:50.551 "state": "online", 00:24:50.551 "raid_level": "raid1", 00:24:50.551 "superblock": false, 00:24:50.551 "num_base_bdevs": 4, 00:24:50.551 "num_base_bdevs_discovered": 4, 00:24:50.551 "num_base_bdevs_operational": 4, 00:24:50.551 "base_bdevs_list": [ 00:24:50.551 { 00:24:50.551 "name": "BaseBdev1", 00:24:50.551 "uuid": "9a66744a-0bc9-4d5a-8e8c-b0294fe127bc", 00:24:50.551 "is_configured": true, 00:24:50.551 "data_offset": 0, 00:24:50.551 "data_size": 65536 00:24:50.551 }, 00:24:50.551 { 00:24:50.551 "name": "BaseBdev2", 00:24:50.551 "uuid": "e1be6eb8-80ea-47e8-bb9e-2893fb4af5d4", 00:24:50.551 "is_configured": true, 00:24:50.551 "data_offset": 0, 00:24:50.551 "data_size": 65536 00:24:50.551 }, 00:24:50.551 { 00:24:50.551 "name": "BaseBdev3", 00:24:50.551 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:50.551 "is_configured": true, 00:24:50.551 "data_offset": 0, 00:24:50.551 "data_size": 65536 00:24:50.551 }, 00:24:50.551 { 00:24:50.551 "name": "BaseBdev4", 00:24:50.551 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:50.551 "is_configured": true, 00:24:50.551 "data_offset": 0, 00:24:50.551 "data_size": 65536 00:24:50.551 } 00:24:50.551 ] 00:24:50.551 }' 00:24:50.551 12:44:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.551 12:44:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.120 12:44:33 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:51.121 12:44:33 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:51.380 [2024-10-01 12:44:33.713033] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:51.380 12:44:33 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:24:51.380 12:44:33 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.380 12:44:33 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:51.381 12:44:33 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:51.381 12:44:33 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:24:51.381 12:44:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:51.381 12:44:33 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:51.640 [2024-10-01 12:44:33.974498] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:51.640 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:51.640 Zero copy mechanism will not be used. 00:24:51.640 Running I/O for 60 seconds... 
00:24:51.640 [2024-10-01 12:44:34.074809] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:51.640 [2024-10-01 12:44:34.085091] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.640 12:44:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.900 12:44:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.900 "name": "raid_bdev1", 00:24:51.900 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:51.900 "strip_size_kb": 0, 00:24:51.900 "state": "online", 00:24:51.900 "raid_level": "raid1", 00:24:51.900 "superblock": false, 00:24:51.900 "num_base_bdevs": 4, 00:24:51.900 "num_base_bdevs_discovered": 3, 00:24:51.900 "num_base_bdevs_operational": 3, 00:24:51.900 "base_bdevs_list": [ 00:24:51.900 { 00:24:51.900 "name": null, 00:24:51.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.900 "is_configured": false, 00:24:51.900 "data_offset": 0, 00:24:51.900 "data_size": 65536 00:24:51.900 }, 00:24:51.900 { 00:24:51.900 "name": "BaseBdev2", 00:24:51.900 "uuid": "e1be6eb8-80ea-47e8-bb9e-2893fb4af5d4", 00:24:51.900 "is_configured": true, 00:24:51.900 "data_offset": 0, 00:24:51.900 "data_size": 65536 00:24:51.900 }, 00:24:51.900 { 00:24:51.900 "name": "BaseBdev3", 00:24:51.900 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:51.900 "is_configured": true, 00:24:51.900 "data_offset": 0, 00:24:51.900 "data_size": 65536 00:24:51.900 }, 00:24:51.900 { 00:24:51.900 "name": "BaseBdev4", 00:24:51.900 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:51.900 "is_configured": true, 00:24:51.900 "data_offset": 0, 00:24:51.900 "data_size": 65536 00:24:51.900 } 00:24:51.900 ] 00:24:51.900 }' 00:24:51.900 12:44:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.900 12:44:34 -- common/autotest_common.sh@10 -- # set +x 00:24:52.506 12:44:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:52.506 [2024-10-01 12:44:34.979009] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:52.506 [2024-10-01 12:44:34.979078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.766 12:44:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:52.766 [2024-10-01 12:44:35.032205] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:24:52.766 [2024-10-01 12:44:35.034293] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:52.766 [2024-10-01 
12:44:35.297423] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:52.766 [2024-10-01 12:44:35.298223] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:53.335 [2024-10-01 12:44:35.631453] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:53.335 [2024-10-01 12:44:35.636875] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:53.335 [2024-10-01 12:44:35.852350] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:53.335 [2024-10-01 12:44:35.852546] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:53.595 12:44:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.595 12:44:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:53.595 12:44:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:53.595 12:44:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:53.595 12:44:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:53.595 12:44:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.595 12:44:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.855 [2024-10-01 12:44:36.202218] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:53.855 [2024-10-01 12:44:36.203434] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:53.855 12:44:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:53.855 "name": "raid_bdev1", 00:24:53.855 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:53.855 "strip_size_kb": 0, 00:24:53.855 "state": "online", 00:24:53.855 "raid_level": "raid1", 00:24:53.855 "superblock": false, 00:24:53.855 "num_base_bdevs": 4, 00:24:53.855 "num_base_bdevs_discovered": 4, 00:24:53.855 "num_base_bdevs_operational": 4, 00:24:53.855 "process": { 00:24:53.855 "type": "rebuild", 00:24:53.855 "target": "spare", 00:24:53.855 "progress": { 00:24:53.855 "blocks": 14336, 00:24:53.855 "percent": 21 00:24:53.855 } 00:24:53.855 }, 00:24:53.855 "base_bdevs_list": [ 00:24:53.855 { 00:24:53.855 "name": "spare", 00:24:53.855 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:24:53.855 "is_configured": true, 00:24:53.855 "data_offset": 0, 00:24:53.855 "data_size": 65536 00:24:53.855 }, 00:24:53.855 { 00:24:53.855 "name": "BaseBdev2", 00:24:53.855 "uuid": "e1be6eb8-80ea-47e8-bb9e-2893fb4af5d4", 00:24:53.855 "is_configured": true, 00:24:53.855 "data_offset": 0, 00:24:53.855 "data_size": 65536 00:24:53.855 }, 00:24:53.855 { 00:24:53.855 "name": "BaseBdev3", 00:24:53.855 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:53.855 "is_configured": true, 00:24:53.855 "data_offset": 0, 00:24:53.855 "data_size": 65536 00:24:53.855 }, 00:24:53.855 { 00:24:53.855 "name": "BaseBdev4", 00:24:53.855 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:53.855 "is_configured": true, 00:24:53.855 "data_offset": 0, 00:24:53.855 "data_size": 65536 00:24:53.855 } 00:24:53.855 ] 00:24:53.855 }' 00:24:53.855 12:44:36 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:53.855 12:44:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.855 12:44:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:53.855 12:44:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.855 12:44:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:54.115 [2024-10-01 12:44:36.441201] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:54.115 [2024-10-01 12:44:36.479302] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.115 [2024-10-01 12:44:36.571122] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:54.375 [2024-10-01 12:44:36.686152] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:54.375 [2024-10-01 12:44:36.695802] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.375 [2024-10-01 12:44:36.715092] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.375 12:44:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.636 12:44:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.636 "name": "raid_bdev1", 00:24:54.636 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:54.636 "strip_size_kb": 0, 00:24:54.636 "state": "online", 00:24:54.636 "raid_level": "raid1", 00:24:54.636 "superblock": false, 00:24:54.636 "num_base_bdevs": 4, 00:24:54.636 "num_base_bdevs_discovered": 3, 00:24:54.636 "num_base_bdevs_operational": 3, 00:24:54.636 "base_bdevs_list": [ 00:24:54.636 { 00:24:54.636 "name": null, 00:24:54.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.636 "is_configured": false, 00:24:54.636 "data_offset": 0, 00:24:54.636 "data_size": 65536 00:24:54.636 }, 00:24:54.636 { 00:24:54.636 "name": "BaseBdev2", 00:24:54.636 "uuid": "e1be6eb8-80ea-47e8-bb9e-2893fb4af5d4", 00:24:54.636 "is_configured": true, 00:24:54.636 "data_offset": 0, 00:24:54.636 "data_size": 65536 00:24:54.636 }, 00:24:54.636 { 00:24:54.636 "name": "BaseBdev3", 00:24:54.636 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:54.636 "is_configured": true, 00:24:54.636 "data_offset": 0, 00:24:54.636 "data_size": 65536 00:24:54.636 }, 00:24:54.636 { 00:24:54.636 "name": "BaseBdev4", 00:24:54.636 "uuid": 
"896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:54.636 "is_configured": true, 00:24:54.636 "data_offset": 0, 00:24:54.636 "data_size": 65536 00:24:54.636 } 00:24:54.636 ] 00:24:54.636 }' 00:24:54.636 12:44:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.636 12:44:36 -- common/autotest_common.sh@10 -- # set +x 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.205 12:44:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:55.205 "name": "raid_bdev1", 00:24:55.205 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:55.205 "strip_size_kb": 0, 00:24:55.205 "state": "online", 00:24:55.205 "raid_level": "raid1", 00:24:55.205 "superblock": false, 00:24:55.205 "num_base_bdevs": 4, 00:24:55.205 "num_base_bdevs_discovered": 3, 00:24:55.205 "num_base_bdevs_operational": 3, 00:24:55.205 "base_bdevs_list": [ 00:24:55.205 { 00:24:55.205 "name": null, 00:24:55.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.205 "is_configured": false, 00:24:55.205 "data_offset": 0, 00:24:55.205 "data_size": 65536 00:24:55.205 }, 00:24:55.205 { 00:24:55.205 "name": "BaseBdev2", 00:24:55.205 "uuid": "e1be6eb8-80ea-47e8-bb9e-2893fb4af5d4", 00:24:55.205 "is_configured": true, 00:24:55.205 "data_offset": 0, 00:24:55.205 "data_size": 65536 00:24:55.205 }, 00:24:55.205 { 00:24:55.205 "name": "BaseBdev3", 00:24:55.205 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:55.205 "is_configured": true, 00:24:55.205 "data_offset": 0, 00:24:55.206 "data_size": 65536 00:24:55.206 }, 00:24:55.206 { 00:24:55.206 "name": "BaseBdev4", 00:24:55.206 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:55.206 "is_configured": true, 00:24:55.206 "data_offset": 0, 00:24:55.206 "data_size": 65536 00:24:55.206 } 00:24:55.206 ] 00:24:55.206 }' 00:24:55.206 12:44:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:55.465 12:44:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:55.465 12:44:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:55.465 12:44:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:55.465 12:44:37 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:55.465 [2024-10-01 12:44:37.987489] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:55.465 [2024-10-01 12:44:37.987555] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.725 12:44:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:55.725 [2024-10-01 12:44:38.040705] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:55.725 [2024-10-01 12:44:38.042818] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:55.725 [2024-10-01 12:44:38.156762] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:24:55.725 [2024-10-01 12:44:38.157867] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:55.984 [2024-10-01 12:44:38.380688] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:55.984 [2024-10-01 12:44:38.380905] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:56.244 [2024-10-01 12:44:38.729111] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:56.244 [2024-10-01 12:44:38.730429] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:56.503 [2024-10-01 12:44:38.953238] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.761 12:44:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:56.761 "name": "raid_bdev1", 00:24:56.761 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:56.761 "strip_size_kb": 0, 00:24:56.761 "state": "online", 00:24:56.761 "raid_level": "raid1", 00:24:56.761 "superblock": false, 00:24:56.761 "num_base_bdevs": 4, 00:24:56.761 "num_base_bdevs_discovered": 4, 00:24:56.761 "num_base_bdevs_operational": 4, 00:24:56.761 "process": { 00:24:56.761 "type": "rebuild", 00:24:56.761 "target": "spare", 00:24:56.762 "progress": { 00:24:56.762 "blocks": 14336, 00:24:56.762 "percent": 21 00:24:56.762 } 00:24:56.762 }, 00:24:56.762 "base_bdevs_list": [ 00:24:56.762 { 00:24:56.762 "name": "spare", 00:24:56.762 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:24:56.762 "is_configured": true, 00:24:56.762 "data_offset": 0, 00:24:56.762 "data_size": 65536 00:24:56.762 }, 00:24:56.762 { 00:24:56.762 "name": "BaseBdev2", 00:24:56.762 "uuid": "e1be6eb8-80ea-47e8-bb9e-2893fb4af5d4", 00:24:56.762 "is_configured": true, 00:24:56.762 "data_offset": 0, 00:24:56.762 "data_size": 65536 00:24:56.762 }, 00:24:56.762 { 00:24:56.762 "name": "BaseBdev3", 00:24:56.762 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:56.762 "is_configured": true, 00:24:56.762 "data_offset": 0, 00:24:56.762 "data_size": 65536 00:24:56.762 }, 00:24:56.762 { 00:24:56.762 "name": "BaseBdev4", 00:24:56.762 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:56.762 "is_configured": true, 00:24:56.762 "data_offset": 0, 00:24:56.762 "data_size": 65536 00:24:56.762 } 00:24:56.762 ] 00:24:56.762 }' 00:24:56.762 12:44:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:56.762 12:44:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.762 12:44:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:56.762 [2024-10-01 12:44:39.286776] bdev_raid.c: 723:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:57.021 12:44:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.021 12:44:39 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:57.021 12:44:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:57.021 12:44:39 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:57.021 12:44:39 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:57.021 12:44:39 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:57.021 [2024-10-01 12:44:39.491472] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:57.021 [2024-10-01 12:44:39.544516] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:57.281 [2024-10-01 12:44:39.659143] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005a00 00:24:57.281 [2024-10-01 12:44:39.659177] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:24:57.281 [2024-10-01 12:44:39.674895] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.281 12:44:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.281 [2024-10-01 12:44:39.797116] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:57.541 12:44:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:57.541 "name": "raid_bdev1", 00:24:57.541 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:57.541 "strip_size_kb": 0, 00:24:57.541 "state": "online", 00:24:57.541 "raid_level": "raid1", 00:24:57.541 "superblock": false, 00:24:57.541 "num_base_bdevs": 4, 00:24:57.541 "num_base_bdevs_discovered": 3, 00:24:57.541 "num_base_bdevs_operational": 3, 00:24:57.541 "process": { 00:24:57.541 "type": "rebuild", 00:24:57.541 "target": "spare", 00:24:57.541 "progress": { 00:24:57.541 "blocks": 22528, 00:24:57.541 "percent": 34 00:24:57.541 } 00:24:57.541 }, 00:24:57.541 "base_bdevs_list": [ 00:24:57.541 { 00:24:57.541 "name": "spare", 00:24:57.541 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:24:57.541 "is_configured": true, 00:24:57.541 "data_offset": 0, 00:24:57.541 "data_size": 65536 00:24:57.541 }, 00:24:57.541 { 00:24:57.541 "name": null, 00:24:57.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.541 "is_configured": false, 00:24:57.541 "data_offset": 0, 00:24:57.541 "data_size": 65536 00:24:57.541 }, 00:24:57.541 { 00:24:57.541 "name": "BaseBdev3", 00:24:57.541 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:57.541 "is_configured": true, 
00:24:57.541 "data_offset": 0, 00:24:57.541 "data_size": 65536 00:24:57.541 }, 00:24:57.541 { 00:24:57.541 "name": "BaseBdev4", 00:24:57.541 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:57.541 "is_configured": true, 00:24:57.541 "data_offset": 0, 00:24:57.541 "data_size": 65536 00:24:57.541 } 00:24:57.541 ] 00:24:57.542 }' 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@657 -- # local timeout=466 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.542 12:44:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.801 12:44:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:57.801 "name": "raid_bdev1", 00:24:57.801 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:57.801 "strip_size_kb": 0, 00:24:57.801 "state": "online", 00:24:57.801 "raid_level": "raid1", 00:24:57.801 "superblock": false, 00:24:57.801 "num_base_bdevs": 4, 00:24:57.801 "num_base_bdevs_discovered": 3, 00:24:57.801 "num_base_bdevs_operational": 3, 00:24:57.801 "process": { 00:24:57.801 "type": "rebuild", 00:24:57.801 "target": "spare", 00:24:57.801 "progress": { 00:24:57.801 "blocks": 26624, 00:24:57.801 "percent": 40 00:24:57.801 } 00:24:57.801 }, 00:24:57.801 "base_bdevs_list": [ 00:24:57.801 { 00:24:57.801 "name": "spare", 00:24:57.801 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:24:57.801 "is_configured": true, 00:24:57.801 "data_offset": 0, 00:24:57.801 "data_size": 65536 00:24:57.801 }, 00:24:57.801 { 00:24:57.801 "name": null, 00:24:57.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.801 "is_configured": false, 00:24:57.801 "data_offset": 0, 00:24:57.801 "data_size": 65536 00:24:57.801 }, 00:24:57.801 { 00:24:57.801 "name": "BaseBdev3", 00:24:57.801 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:57.801 "is_configured": true, 00:24:57.801 "data_offset": 0, 00:24:57.801 "data_size": 65536 00:24:57.801 }, 00:24:57.801 { 00:24:57.801 "name": "BaseBdev4", 00:24:57.801 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:57.801 "is_configured": true, 00:24:57.801 "data_offset": 0, 00:24:57.801 "data_size": 65536 00:24:57.801 } 00:24:57.801 ] 00:24:57.801 }' 00:24:57.801 12:44:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:57.801 12:44:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.801 12:44:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:57.801 [2024-10-01 12:44:40.234244] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:57.801 12:44:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:24:57.801 12:44:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:58.369 [2024-10-01 12:44:40.671488] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:58.369 [2024-10-01 12:44:40.887392] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:58.628 [2024-10-01 12:44:40.993537] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.888 12:44:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.148 12:44:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:59.148 "name": "raid_bdev1", 00:24:59.148 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:24:59.148 "strip_size_kb": 0, 00:24:59.148 "state": "online", 00:24:59.148 "raid_level": "raid1", 00:24:59.148 "superblock": false, 00:24:59.148 "num_base_bdevs": 4, 00:24:59.148 "num_base_bdevs_discovered": 3, 00:24:59.148 "num_base_bdevs_operational": 3, 00:24:59.148 "process": { 00:24:59.148 "type": "rebuild", 00:24:59.148 "target": "spare", 00:24:59.148 "progress": { 00:24:59.148 "blocks": 47104, 00:24:59.148 "percent": 71 00:24:59.148 } 00:24:59.148 }, 00:24:59.148 "base_bdevs_list": [ 00:24:59.148 { 00:24:59.148 "name": "spare", 00:24:59.148 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:24:59.148 "is_configured": true, 00:24:59.148 "data_offset": 0, 00:24:59.148 "data_size": 65536 00:24:59.148 }, 00:24:59.148 { 00:24:59.148 "name": null, 00:24:59.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.148 "is_configured": false, 00:24:59.148 "data_offset": 0, 00:24:59.148 "data_size": 65536 00:24:59.148 }, 00:24:59.148 { 00:24:59.148 "name": "BaseBdev3", 00:24:59.148 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:24:59.148 "is_configured": true, 00:24:59.148 "data_offset": 0, 00:24:59.148 "data_size": 65536 00:24:59.148 }, 00:24:59.148 { 00:24:59.148 "name": "BaseBdev4", 00:24:59.148 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:24:59.148 "is_configured": true, 00:24:59.148 "data_offset": 0, 00:24:59.148 "data_size": 65536 00:24:59.148 } 00:24:59.148 ] 00:24:59.148 }' 00:24:59.148 12:44:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:59.148 12:44:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.148 12:44:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:59.148 12:44:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.148 12:44:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:00.086 [2024-10-01 12:44:42.375896] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:00.086 [2024-10-01 12:44:42.480766] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:00.086 [2024-10-01 
12:44:42.484017] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.086 12:44:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:00.346 "name": "raid_bdev1", 00:25:00.346 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:25:00.346 "strip_size_kb": 0, 00:25:00.346 "state": "online", 00:25:00.346 "raid_level": "raid1", 00:25:00.346 "superblock": false, 00:25:00.346 "num_base_bdevs": 4, 00:25:00.346 "num_base_bdevs_discovered": 3, 00:25:00.346 "num_base_bdevs_operational": 3, 00:25:00.346 "base_bdevs_list": [ 00:25:00.346 { 00:25:00.346 "name": "spare", 00:25:00.346 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:25:00.346 "is_configured": true, 00:25:00.346 "data_offset": 0, 00:25:00.346 "data_size": 65536 00:25:00.346 }, 00:25:00.346 { 00:25:00.346 "name": null, 00:25:00.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.346 "is_configured": false, 00:25:00.346 "data_offset": 0, 00:25:00.346 "data_size": 65536 00:25:00.346 }, 00:25:00.346 { 00:25:00.346 "name": "BaseBdev3", 00:25:00.346 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:25:00.346 "is_configured": true, 00:25:00.346 "data_offset": 0, 00:25:00.346 "data_size": 65536 00:25:00.346 }, 00:25:00.346 { 00:25:00.346 "name": "BaseBdev4", 00:25:00.346 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:25:00.346 "is_configured": true, 00:25:00.346 "data_offset": 0, 00:25:00.346 "data_size": 65536 00:25:00.346 } 00:25:00.346 ] 00:25:00.346 }' 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@660 -- # break 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.346 12:44:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.606 12:44:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:00.606 "name": "raid_bdev1", 00:25:00.606 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:25:00.606 "strip_size_kb": 0, 00:25:00.606 "state": "online", 00:25:00.606 "raid_level": "raid1", 00:25:00.606 "superblock": false, 00:25:00.606 
"num_base_bdevs": 4, 00:25:00.606 "num_base_bdevs_discovered": 3, 00:25:00.606 "num_base_bdevs_operational": 3, 00:25:00.606 "base_bdevs_list": [ 00:25:00.606 { 00:25:00.606 "name": "spare", 00:25:00.606 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:25:00.606 "is_configured": true, 00:25:00.606 "data_offset": 0, 00:25:00.606 "data_size": 65536 00:25:00.606 }, 00:25:00.606 { 00:25:00.606 "name": null, 00:25:00.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.606 "is_configured": false, 00:25:00.606 "data_offset": 0, 00:25:00.606 "data_size": 65536 00:25:00.606 }, 00:25:00.606 { 00:25:00.606 "name": "BaseBdev3", 00:25:00.606 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:25:00.606 "is_configured": true, 00:25:00.606 "data_offset": 0, 00:25:00.606 "data_size": 65536 00:25:00.606 }, 00:25:00.606 { 00:25:00.606 "name": "BaseBdev4", 00:25:00.606 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1", 00:25:00.606 "is_configured": true, 00:25:00.606 "data_offset": 0, 00:25:00.606 "data_size": 65536 00:25:00.606 } 00:25:00.606 ] 00:25:00.606 }' 00:25:00.606 12:44:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.606 12:44:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.866 12:44:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:00.866 "name": "raid_bdev1", 00:25:00.866 "uuid": "d1375ffc-e546-4572-90d5-bb1b83d28c2f", 00:25:00.866 "strip_size_kb": 0, 00:25:00.866 "state": "online", 00:25:00.866 "raid_level": "raid1", 00:25:00.866 "superblock": false, 00:25:00.866 "num_base_bdevs": 4, 00:25:00.866 "num_base_bdevs_discovered": 3, 00:25:00.866 "num_base_bdevs_operational": 3, 00:25:00.866 "base_bdevs_list": [ 00:25:00.866 { 00:25:00.866 "name": "spare", 00:25:00.866 "uuid": "770a18f8-c600-5e7d-b277-ee50e6aed3e1", 00:25:00.866 "is_configured": true, 00:25:00.866 "data_offset": 0, 00:25:00.866 "data_size": 65536 00:25:00.866 }, 00:25:00.866 { 00:25:00.866 "name": null, 00:25:00.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.866 "is_configured": false, 00:25:00.866 "data_offset": 0, 00:25:00.866 "data_size": 65536 00:25:00.866 }, 00:25:00.866 { 00:25:00.866 "name": "BaseBdev3", 00:25:00.866 "uuid": "b2de03fb-a0d4-4958-a2a1-eaa89a78e5a3", 00:25:00.866 "is_configured": true, 00:25:00.866 "data_offset": 0, 00:25:00.866 "data_size": 65536 00:25:00.866 }, 
00:25:00.866 {
00:25:00.866 "name": "BaseBdev4",
00:25:00.866 "uuid": "896b92cc-8684-4cfa-97c3-010ea3683aa1",
00:25:00.866 "is_configured": true,
00:25:00.866 "data_offset": 0,
00:25:00.866 "data_size": 65536
00:25:00.866 }
00:25:00.866 ]
00:25:00.866 }'
00:25:00.866 12:44:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:00.866 12:44:43 -- common/autotest_common.sh@10 -- # set +x
00:25:01.434 12:44:43 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:25:01.434 [2024-10-01 12:44:43.909106] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:01.434 [2024-10-01 12:44:43.909145] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:01.694
00:25:01.694 Latency(us)
00:25:01.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:01.694 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:25:01.694 raid_bdev1 : 10.02 113.92 341.77 0.00 0.00 12192.15 304.32 115385.47
00:25:01.694 ===================================================================================================================
00:25:01.694 Total : 113.92 341.77 0.00 0.00 12192.15 304.32 115385.47
00:25:01.694 [2024-10-01 12:44:44.006133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:01.694 [2024-10-01 12:44:44.006174] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:01.694 [2024-10-01 12:44:44.006261] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:01.694 [2024-10-01 12:44:44.006271] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:25:01.694 0
00:25:01.694 12:44:44 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:01.694 12:44:44 -- bdev/bdev_raid.sh@671 -- # jq length
00:25:01.694 12:44:44 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:25:01.694 12:44:44 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:25:01.694 12:44:44 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@12 -- # local i
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:25:01.694 12:44:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:25:01.954 /dev/nbd0
00:25:01.954 12:44:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:25:01.954 12:44:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:25:01.954 12:44:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:25:01.954 12:44:44 -- common/autotest_common.sh@857 -- # local i
00:25:01.954 12:44:44 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:25:01.954 12:44:44 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:25:01.954 12:44:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0
/proc/partitions 00:25:01.954 12:44:44 -- common/autotest_common.sh@861 -- # break 00:25:01.954 12:44:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:01.954 12:44:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:01.954 12:44:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:01.954 1+0 records in 00:25:01.954 1+0 records out 00:25:01.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601561 s, 6.8 MB/s 00:25:01.954 12:44:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.954 12:44:44 -- common/autotest_common.sh@874 -- # size=4096 00:25:01.954 12:44:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.954 12:44:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:01.954 12:44:44 -- common/autotest_common.sh@877 -- # return 0 00:25:01.954 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:01.954 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:01.954 12:44:44 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:01.954 12:44:44 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:01.954 12:44:44 -- bdev/bdev_raid.sh@678 -- # continue 00:25:01.954 12:44:44 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:01.954 12:44:44 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:01.954 12:44:44 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:01.954 12:44:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:01.954 12:44:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:01.954 12:44:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:01.954 12:44:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:01.955 12:44:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:01.955 12:44:44 -- bdev/nbd_common.sh@12 -- # local i 00:25:01.955 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:01.955 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:01.955 12:44:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:02.214 /dev/nbd1 00:25:02.214 12:44:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:02.214 12:44:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:02.214 12:44:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:02.214 12:44:44 -- common/autotest_common.sh@857 -- # local i 00:25:02.214 12:44:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:02.214 12:44:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:02.214 12:44:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:02.214 12:44:44 -- common/autotest_common.sh@861 -- # break 00:25:02.214 12:44:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:02.214 12:44:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:02.214 12:44:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:02.214 1+0 records in 00:25:02.214 1+0 records out 00:25:02.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618093 s, 6.6 MB/s 00:25:02.214 12:44:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.214 12:44:44 -- common/autotest_common.sh@874 -- # size=4096 00:25:02.214 12:44:44 
-- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.214 12:44:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:02.214 12:44:44 -- common/autotest_common.sh@877 -- # return 0 00:25:02.214 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:02.214 12:44:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:02.214 12:44:44 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:02.474 12:44:44 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:02.474 12:44:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:02.474 12:44:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:02.474 12:44:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:02.474 12:44:44 -- bdev/nbd_common.sh@51 -- # local i 00:25:02.474 12:44:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:02.474 12:44:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@41 -- # break 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@45 -- # return 0 00:25:02.733 12:44:45 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:02.733 12:44:45 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:25:02.733 12:44:45 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@12 -- # local i 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:02.733 12:44:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:02.993 /dev/nbd1 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:02.993 12:44:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:02.993 12:44:45 -- common/autotest_common.sh@857 -- # local i 00:25:02.993 12:44:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:02.993 12:44:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:02.993 12:44:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:02.993 12:44:45 -- common/autotest_common.sh@861 -- # break 00:25:02.993 12:44:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:02.993 12:44:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:02.993 12:44:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:02.993 1+0 records 
in 00:25:02.993 1+0 records out 00:25:02.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595611 s, 6.9 MB/s 00:25:02.993 12:44:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.993 12:44:45 -- common/autotest_common.sh@874 -- # size=4096 00:25:02.993 12:44:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.993 12:44:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:02.993 12:44:45 -- common/autotest_common.sh@877 -- # return 0 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:02.993 12:44:45 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:02.993 12:44:45 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@51 -- # local i 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:02.993 12:44:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@41 -- # break 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.252 12:44:45 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@51 -- # local i 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.252 12:44:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@41 -- # break 00:25:03.512 12:44:45 -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.512 12:44:45 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:03.512 12:44:45 -- bdev/bdev_raid.sh@709 -- # killprocess 126115 00:25:03.512 12:44:45 -- common/autotest_common.sh@926 -- # '[' -z 126115 ']' 00:25:03.512 12:44:45 -- common/autotest_common.sh@930 -- # kill -0 126115 00:25:03.512 12:44:45 -- common/autotest_common.sh@931 -- # uname 00:25:03.512 12:44:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:25:03.512 12:44:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126115
00:25:03.512 12:44:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:25:03.512 12:44:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:25:03.512 12:44:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126115'
killing process with pid 126115
00:25:03.512 12:44:45 -- common/autotest_common.sh@945 -- # kill 126115
00:25:03.512 Received shutdown signal, test time was about 11.944876 seconds
00:25:03.512
00:25:03.512 Latency(us)
00:25:03.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:03.512 ===================================================================================================================
00:25:03.512 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:03.512 [2024-10-01 12:44:45.902240] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:25:03.512 12:44:45 -- common/autotest_common.sh@950 -- # wait 126115
00:25:04.081 [2024-10-01 12:44:46.341553] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@711 -- # return 0
00:25:05.461 ************************************
00:25:05.461 END TEST raid_rebuild_test_io
00:25:05.461 ************************************
00:25:05.461
00:25:05.461 real 0m17.614s
00:25:05.461 user 0m25.567s
00:25:05.461 sys 0m2.613s
00:25:05.461 12:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:05.461 12:44:47 -- common/autotest_common.sh@10 -- # set +x
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true
00:25:05.461 12:44:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:25:05.461 12:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:05.461 12:44:47 -- common/autotest_common.sh@10 -- # set +x
00:25:05.461 ************************************
00:25:05.461 START TEST raid_rebuild_test_sb_io
00:25:05.461 ************************************
00:25:05.461 12:44:47 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@519 -- # local superblock=true
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@520 -- # local background_io=true
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=126620 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126620 /var/tmp/spdk-raid.sock 00:25:05.461 12:44:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:05.461 12:44:47 -- common/autotest_common.sh@819 -- # '[' -z 126620 ']' 00:25:05.461 12:44:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:05.461 12:44:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:05.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:05.461 12:44:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:05.461 12:44:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:05.461 12:44:47 -- common/autotest_common.sh@10 -- # set +x 00:25:05.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:05.461 Zero copy mechanism will not be used. 00:25:05.461 [2024-10-01 12:44:47.981744] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:25:05.461 [2024-10-01 12:44:47.981929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126620 ] 00:25:05.721 [2024-10-01 12:44:48.149811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.981 [2024-10-01 12:44:48.374418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.240 [2024-10-01 12:44:48.634093] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:07.207 12:44:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.207 12:44:49 -- common/autotest_common.sh@852 -- # return 0 00:25:07.207 12:44:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:07.207 12:44:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:07.207 12:44:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:07.207 BaseBdev1_malloc 00:25:07.207 12:44:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:07.467 [2024-10-01 12:44:49.859369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:07.467 [2024-10-01 12:44:49.859456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.467 [2024-10-01 12:44:49.859491] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:07.467 [2024-10-01 12:44:49.859534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.467 [2024-10-01 12:44:49.861899] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.467 [2024-10-01 12:44:49.861944] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:07.467 BaseBdev1 00:25:07.467 12:44:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:07.467 12:44:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:07.467 12:44:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:07.726 BaseBdev2_malloc 00:25:07.726 12:44:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:07.985 [2024-10-01 12:44:50.301607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:07.985 [2024-10-01 12:44:50.301707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.985 [2024-10-01 12:44:50.301752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:07.985 [2024-10-01 12:44:50.301808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.985 [2024-10-01 12:44:50.304348] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.985 [2024-10-01 12:44:50.304401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:07.985 BaseBdev2 00:25:07.985 12:44:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:07.985 12:44:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:07.985 12:44:50 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:08.244 BaseBdev3_malloc 00:25:08.244 12:44:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:08.244 [2024-10-01 12:44:50.705894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:08.244 [2024-10-01 12:44:50.705971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.244 [2024-10-01 12:44:50.706009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:08.244 [2024-10-01 12:44:50.706051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.244 [2024-10-01 12:44:50.708470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.244 [2024-10-01 12:44:50.708523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:08.244 BaseBdev3 00:25:08.244 12:44:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:08.244 12:44:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:08.244 12:44:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:08.503 BaseBdev4_malloc 00:25:08.503 12:44:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:08.762 [2024-10-01 12:44:51.123198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:08.762 [2024-10-01 12:44:51.123273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.762 [2024-10-01 12:44:51.123303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:08.762 [2024-10-01 12:44:51.123344] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.762 [2024-10-01 12:44:51.125812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.762 [2024-10-01 12:44:51.125874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:08.762 BaseBdev4 00:25:08.762 12:44:51 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:09.022 spare_malloc 00:25:09.022 12:44:51 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:09.022 spare_delay 00:25:09.022 12:44:51 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:09.281 [2024-10-01 12:44:51.716995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:09.281 [2024-10-01 12:44:51.717067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.281 [2024-10-01 12:44:51.717096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:09.281 [2024-10-01 12:44:51.717136] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.281 [2024-10-01 12:44:51.719294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:09.281 [2024-10-01 12:44:51.719351] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:09.281 spare 00:25:09.281 12:44:51 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:09.540 [2024-10-01 12:44:51.892833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:09.540 [2024-10-01 12:44:51.894856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:09.540 [2024-10-01 12:44:51.894931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:09.540 [2024-10-01 12:44:51.894972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:09.540 [2024-10-01 12:44:51.895131] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:09.541 [2024-10-01 12:44:51.895139] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:09.541 [2024-10-01 12:44:51.895252] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:09.541 [2024-10-01 12:44:51.895565] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:09.541 [2024-10-01 12:44:51.895583] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:09.541 [2024-10-01 12:44:51.895700] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.541 12:44:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.800 12:44:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.800 "name": "raid_bdev1", 00:25:09.800 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:09.800 "strip_size_kb": 0, 00:25:09.800 "state": "online", 00:25:09.800 "raid_level": "raid1", 00:25:09.800 "superblock": true, 00:25:09.800 "num_base_bdevs": 4, 00:25:09.800 "num_base_bdevs_discovered": 4, 00:25:09.800 "num_base_bdevs_operational": 4, 00:25:09.800 "base_bdevs_list": [ 00:25:09.800 { 00:25:09.800 "name": "BaseBdev1", 00:25:09.800 "uuid": "4a67bae5-2e9f-5712-8fc0-3d49970af0ea", 00:25:09.800 "is_configured": true, 00:25:09.800 "data_offset": 2048, 00:25:09.800 "data_size": 63488 00:25:09.800 }, 00:25:09.800 { 00:25:09.800 "name": "BaseBdev2", 00:25:09.800 "uuid": "6c54862e-12e8-55dc-bedc-a302156054a4", 00:25:09.800 "is_configured": true, 00:25:09.800 "data_offset": 2048, 
00:25:09.800 "data_size": 63488 00:25:09.800 }, 00:25:09.800 { 00:25:09.800 "name": "BaseBdev3", 00:25:09.800 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:09.800 "is_configured": true, 00:25:09.800 "data_offset": 2048, 00:25:09.800 "data_size": 63488 00:25:09.800 }, 00:25:09.800 { 00:25:09.800 "name": "BaseBdev4", 00:25:09.800 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:09.800 "is_configured": true, 00:25:09.800 "data_offset": 2048, 00:25:09.800 "data_size": 63488 00:25:09.800 } 00:25:09.800 ] 00:25:09.800 }' 00:25:09.800 12:44:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.800 12:44:52 -- common/autotest_common.sh@10 -- # set +x 00:25:10.369 12:44:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:10.369 12:44:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:10.369 [2024-10-01 12:44:52.795620] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.369 12:44:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:25:10.369 12:44:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:10.369 12:44:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.629 12:44:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:10.629 12:44:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:25:10.629 12:44:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:10.629 12:44:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:10.629 [2024-10-01 12:44:53.084983] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:10.629 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:10.629 Zero copy mechanism will not be used. 00:25:10.629 Running I/O for 60 seconds... 
00:25:10.889 [2024-10-01 12:44:53.174475] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:10.889 [2024-10-01 12:44:53.174675] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:10.889 "name": "raid_bdev1", 00:25:10.889 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:10.889 "strip_size_kb": 0, 00:25:10.889 "state": "online", 00:25:10.889 "raid_level": "raid1", 00:25:10.889 "superblock": true, 00:25:10.889 "num_base_bdevs": 4, 00:25:10.889 "num_base_bdevs_discovered": 3, 00:25:10.889 "num_base_bdevs_operational": 3, 00:25:10.889 "base_bdevs_list": [ 00:25:10.889 { 00:25:10.889 "name": null, 00:25:10.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.889 "is_configured": false, 00:25:10.889 "data_offset": 2048, 00:25:10.889 "data_size": 63488 00:25:10.889 }, 00:25:10.889 { 00:25:10.889 "name": "BaseBdev2", 00:25:10.889 "uuid": "6c54862e-12e8-55dc-bedc-a302156054a4", 00:25:10.889 "is_configured": true, 00:25:10.889 "data_offset": 2048, 00:25:10.889 "data_size": 63488 00:25:10.889 }, 00:25:10.889 { 00:25:10.889 "name": "BaseBdev3", 00:25:10.889 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:10.889 "is_configured": true, 00:25:10.889 "data_offset": 2048, 00:25:10.889 "data_size": 63488 00:25:10.889 }, 00:25:10.889 { 00:25:10.889 "name": "BaseBdev4", 00:25:10.889 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:10.889 "is_configured": true, 00:25:10.889 "data_offset": 2048, 00:25:10.889 "data_size": 63488 00:25:10.889 } 00:25:10.889 ] 00:25:10.889 }' 00:25:10.889 12:44:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:10.889 12:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:11.457 12:44:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:11.716 [2024-10-01 12:44:54.103040] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:11.716 [2024-10-01 12:44:54.103109] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.716 12:44:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:11.716 [2024-10-01 12:44:54.144238] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:11.716 [2024-10-01 12:44:54.146331] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:11.976 
[2024-10-01 12:44:54.252164] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:11.976 [2024-10-01 12:44:54.252462] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:11.976 [2024-10-01 12:44:54.469417] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:11.976 [2024-10-01 12:44:54.470285] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:12.545 [2024-10-01 12:44:54.777520] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:12.545 [2024-10-01 12:44:54.900270] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:12.804 [2024-10-01 12:44:55.138173] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:12.804 12:44:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.804 12:44:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:12.804 12:44:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:12.805 12:44:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:12.805 12:44:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:12.805 12:44:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.805 12:44:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.805 12:44:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:12.805 "name": "raid_bdev1", 00:25:12.805 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:12.805 "strip_size_kb": 0, 00:25:12.805 "state": "online", 00:25:12.805 "raid_level": "raid1", 00:25:12.805 "superblock": true, 00:25:12.805 "num_base_bdevs": 4, 00:25:12.805 "num_base_bdevs_discovered": 4, 00:25:12.805 "num_base_bdevs_operational": 4, 00:25:12.805 "process": { 00:25:12.805 "type": "rebuild", 00:25:12.805 "target": "spare", 00:25:12.805 "progress": { 00:25:12.805 "blocks": 14336, 00:25:12.805 "percent": 22 00:25:12.805 } 00:25:12.805 }, 00:25:12.805 "base_bdevs_list": [ 00:25:12.805 { 00:25:12.805 "name": "spare", 00:25:12.805 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:12.805 "is_configured": true, 00:25:12.805 "data_offset": 2048, 00:25:12.805 "data_size": 63488 00:25:12.805 }, 00:25:12.805 { 00:25:12.805 "name": "BaseBdev2", 00:25:12.805 "uuid": "6c54862e-12e8-55dc-bedc-a302156054a4", 00:25:12.805 "is_configured": true, 00:25:12.805 "data_offset": 2048, 00:25:12.805 "data_size": 63488 00:25:12.805 }, 00:25:12.805 { 00:25:12.805 "name": "BaseBdev3", 00:25:12.805 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:12.805 "is_configured": true, 00:25:12.805 "data_offset": 2048, 00:25:12.805 "data_size": 63488 00:25:12.805 }, 00:25:12.805 { 00:25:12.805 "name": "BaseBdev4", 00:25:12.805 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:12.805 "is_configured": true, 00:25:12.805 "data_offset": 2048, 00:25:12.805 "data_size": 63488 00:25:12.805 } 00:25:12.805 ] 00:25:12.805 }' 00:25:13.064 12:44:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:13.064 [2024-10-01 12:44:55.354996] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:13.064 [2024-10-01 12:44:55.355677] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:13.064 12:44:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.064 12:44:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.064 12:44:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.064 12:44:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:13.323 [2024-10-01 12:44:55.600077] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.323 [2024-10-01 12:44:55.688509] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:13.323 [2024-10-01 12:44:55.695867] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:13.323 [2024-10-01 12:44:55.705794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.323 [2024-10-01 12:44:55.734759] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.323 12:44:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.582 12:44:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.582 "name": "raid_bdev1", 00:25:13.582 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:13.582 "strip_size_kb": 0, 00:25:13.582 "state": "online", 00:25:13.582 "raid_level": "raid1", 00:25:13.582 "superblock": true, 00:25:13.582 "num_base_bdevs": 4, 00:25:13.582 "num_base_bdevs_discovered": 3, 00:25:13.582 "num_base_bdevs_operational": 3, 00:25:13.582 "base_bdevs_list": [ 00:25:13.582 { 00:25:13.582 "name": null, 00:25:13.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.582 "is_configured": false, 00:25:13.582 "data_offset": 2048, 00:25:13.582 "data_size": 63488 00:25:13.582 }, 00:25:13.582 { 00:25:13.582 "name": "BaseBdev2", 00:25:13.582 "uuid": "6c54862e-12e8-55dc-bedc-a302156054a4", 00:25:13.582 "is_configured": true, 00:25:13.582 "data_offset": 2048, 00:25:13.582 "data_size": 63488 00:25:13.582 }, 00:25:13.582 { 00:25:13.582 "name": "BaseBdev3", 00:25:13.582 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:13.582 "is_configured": true, 00:25:13.582 "data_offset": 2048, 00:25:13.582 "data_size": 63488 00:25:13.582 }, 00:25:13.582 { 00:25:13.582 "name": "BaseBdev4", 00:25:13.582 "uuid": 
"1d14052b-c341-514c-8dba-13c7b19de216", 00:25:13.582 "is_configured": true, 00:25:13.582 "data_offset": 2048, 00:25:13.582 "data_size": 63488 00:25:13.582 } 00:25:13.582 ] 00:25:13.582 }' 00:25:13.582 12:44:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.582 12:44:55 -- common/autotest_common.sh@10 -- # set +x 00:25:14.148 12:44:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:14.148 12:44:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:14.148 12:44:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:14.148 12:44:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:14.148 12:44:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:14.148 12:44:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.148 12:44:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.406 12:44:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:14.406 "name": "raid_bdev1", 00:25:14.406 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:14.406 "strip_size_kb": 0, 00:25:14.406 "state": "online", 00:25:14.406 "raid_level": "raid1", 00:25:14.406 "superblock": true, 00:25:14.406 "num_base_bdevs": 4, 00:25:14.406 "num_base_bdevs_discovered": 3, 00:25:14.406 "num_base_bdevs_operational": 3, 00:25:14.406 "base_bdevs_list": [ 00:25:14.406 { 00:25:14.406 "name": null, 00:25:14.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.406 "is_configured": false, 00:25:14.406 "data_offset": 2048, 00:25:14.406 "data_size": 63488 00:25:14.406 }, 00:25:14.406 { 00:25:14.406 "name": "BaseBdev2", 00:25:14.406 "uuid": "6c54862e-12e8-55dc-bedc-a302156054a4", 00:25:14.406 "is_configured": true, 00:25:14.406 "data_offset": 2048, 00:25:14.406 "data_size": 63488 00:25:14.406 }, 00:25:14.406 { 00:25:14.406 "name": "BaseBdev3", 00:25:14.406 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:14.406 "is_configured": true, 00:25:14.406 "data_offset": 2048, 00:25:14.406 "data_size": 63488 00:25:14.406 }, 00:25:14.406 { 00:25:14.406 "name": "BaseBdev4", 00:25:14.406 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:14.406 "is_configured": true, 00:25:14.406 "data_offset": 2048, 00:25:14.406 "data_size": 63488 00:25:14.406 } 00:25:14.406 ] 00:25:14.406 }' 00:25:14.406 12:44:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:14.406 12:44:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:14.406 12:44:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:14.406 12:44:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:14.406 12:44:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:14.665 [2024-10-01 12:44:56.982059] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:14.665 [2024-10-01 12:44:56.982128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.665 12:44:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:14.665 [2024-10-01 12:44:57.040414] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:14.665 [2024-10-01 12:44:57.042595] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:14.665 [2024-10-01 12:44:57.145179] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:25:14.665 [2024-10-01 12:44:57.145611] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:14.924 [2024-10-01 12:44:57.268690] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:14.924 [2024-10-01 12:44:57.268996] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:15.184 [2024-10-01 12:44:57.603520] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:15.184 [2024-10-01 12:44:57.603864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:15.442 [2024-10-01 12:44:57.731168] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:15.443 [2024-10-01 12:44:57.731373] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:15.443 [2024-10-01 12:44:57.962815] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.702 [2024-10-01 12:44:58.170553] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:15.702 [2024-10-01 12:44:58.170796] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.702 "name": "raid_bdev1", 00:25:15.702 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:15.702 "strip_size_kb": 0, 00:25:15.702 "state": "online", 00:25:15.702 "raid_level": "raid1", 00:25:15.702 "superblock": true, 00:25:15.702 "num_base_bdevs": 4, 00:25:15.702 "num_base_bdevs_discovered": 4, 00:25:15.702 "num_base_bdevs_operational": 4, 00:25:15.702 "process": { 00:25:15.702 "type": "rebuild", 00:25:15.702 "target": "spare", 00:25:15.702 "progress": { 00:25:15.702 "blocks": 16384, 00:25:15.702 "percent": 25 00:25:15.702 } 00:25:15.702 }, 00:25:15.702 "base_bdevs_list": [ 00:25:15.702 { 00:25:15.702 "name": "spare", 00:25:15.702 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:15.702 "is_configured": true, 00:25:15.702 "data_offset": 2048, 00:25:15.702 "data_size": 63488 00:25:15.702 }, 00:25:15.702 { 00:25:15.702 "name": "BaseBdev2", 00:25:15.702 "uuid": "6c54862e-12e8-55dc-bedc-a302156054a4", 00:25:15.702 "is_configured": true, 00:25:15.702 "data_offset": 2048, 00:25:15.702 "data_size": 63488 00:25:15.702 }, 00:25:15.702 { 00:25:15.702 "name": "BaseBdev3", 00:25:15.702 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:15.702 "is_configured": true, 
00:25:15.702 "data_offset": 2048, 00:25:15.702 "data_size": 63488 00:25:15.702 }, 00:25:15.702 { 00:25:15.702 "name": "BaseBdev4", 00:25:15.702 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:15.702 "is_configured": true, 00:25:15.702 "data_offset": 2048, 00:25:15.702 "data_size": 63488 00:25:15.702 } 00:25:15.702 ] 00:25:15.702 }' 00:25:15.702 12:44:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:15.961 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:15.961 12:44:58 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:15.961 [2024-10-01 12:44:58.477995] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:16.220 [2024-10-01 12:44:58.612603] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:25:16.220 [2024-10-01 12:44:58.612642] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:25:16.220 [2024-10-01 12:44:58.735049] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.479 "name": "raid_bdev1", 00:25:16.479 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:16.479 "strip_size_kb": 0, 00:25:16.479 "state": "online", 00:25:16.479 "raid_level": "raid1", 00:25:16.479 "superblock": true, 00:25:16.479 "num_base_bdevs": 4, 00:25:16.479 "num_base_bdevs_discovered": 3, 00:25:16.479 "num_base_bdevs_operational": 3, 00:25:16.479 "process": { 00:25:16.479 "type": "rebuild", 00:25:16.479 "target": "spare", 00:25:16.479 "progress": { 00:25:16.479 "blocks": 22528, 00:25:16.479 "percent": 35 00:25:16.479 } 00:25:16.479 }, 00:25:16.479 "base_bdevs_list": [ 00:25:16.479 { 00:25:16.479 "name": "spare", 00:25:16.479 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:16.479 "is_configured": true, 00:25:16.479 "data_offset": 2048, 00:25:16.479 "data_size": 63488 00:25:16.479 }, 
00:25:16.479 { 00:25:16.479 "name": null, 00:25:16.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.479 "is_configured": false, 00:25:16.479 "data_offset": 2048, 00:25:16.479 "data_size": 63488 00:25:16.479 }, 00:25:16.479 { 00:25:16.479 "name": "BaseBdev3", 00:25:16.479 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:16.479 "is_configured": true, 00:25:16.479 "data_offset": 2048, 00:25:16.479 "data_size": 63488 00:25:16.479 }, 00:25:16.479 { 00:25:16.479 "name": "BaseBdev4", 00:25:16.479 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:16.479 "is_configured": true, 00:25:16.479 "data_offset": 2048, 00:25:16.479 "data_size": 63488 00:25:16.479 } 00:25:16.479 ] 00:25:16.479 }' 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.479 12:44:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@657 -- # local timeout=486 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.738 [2024-10-01 12:44:59.185946] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.738 "name": "raid_bdev1", 00:25:16.738 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:16.738 "strip_size_kb": 0, 00:25:16.738 "state": "online", 00:25:16.738 "raid_level": "raid1", 00:25:16.738 "superblock": true, 00:25:16.738 "num_base_bdevs": 4, 00:25:16.738 "num_base_bdevs_discovered": 3, 00:25:16.738 "num_base_bdevs_operational": 3, 00:25:16.738 "process": { 00:25:16.738 "type": "rebuild", 00:25:16.738 "target": "spare", 00:25:16.738 "progress": { 00:25:16.738 "blocks": 28672, 00:25:16.738 "percent": 45 00:25:16.738 } 00:25:16.738 }, 00:25:16.738 "base_bdevs_list": [ 00:25:16.738 { 00:25:16.738 "name": "spare", 00:25:16.738 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:16.738 "is_configured": true, 00:25:16.738 "data_offset": 2048, 00:25:16.738 "data_size": 63488 00:25:16.738 }, 00:25:16.738 { 00:25:16.738 "name": null, 00:25:16.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.738 "is_configured": false, 00:25:16.738 "data_offset": 2048, 00:25:16.738 "data_size": 63488 00:25:16.738 }, 00:25:16.738 { 00:25:16.738 "name": "BaseBdev3", 00:25:16.738 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:16.738 "is_configured": true, 00:25:16.738 "data_offset": 2048, 00:25:16.738 "data_size": 63488 00:25:16.738 }, 00:25:16.738 { 00:25:16.738 "name": "BaseBdev4", 00:25:16.738 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:16.738 "is_configured": true, 00:25:16.738 "data_offset": 2048, 00:25:16.738 "data_size": 63488 
00:25:16.738 } 00:25:16.738 ] 00:25:16.738 }' 00:25:16.738 12:44:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:16.997 12:44:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.997 12:44:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.997 12:44:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.997 12:44:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:16.997 [2024-10-01 12:44:59.418741] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:17.256 [2024-10-01 12:44:59.633518] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:17.515 [2024-10-01 12:44:59.949346] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.103 [2024-10-01 12:45:00.368637] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:25:18.103 [2024-10-01 12:45:00.368997] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:25:18.103 [2024-10-01 12:45:00.493511] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:18.103 "name": "raid_bdev1", 00:25:18.103 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:18.103 "strip_size_kb": 0, 00:25:18.103 "state": "online", 00:25:18.103 "raid_level": "raid1", 00:25:18.103 "superblock": true, 00:25:18.103 "num_base_bdevs": 4, 00:25:18.103 "num_base_bdevs_discovered": 3, 00:25:18.103 "num_base_bdevs_operational": 3, 00:25:18.103 "process": { 00:25:18.103 "type": "rebuild", 00:25:18.103 "target": "spare", 00:25:18.103 "progress": { 00:25:18.103 "blocks": 47104, 00:25:18.103 "percent": 74 00:25:18.103 } 00:25:18.103 }, 00:25:18.103 "base_bdevs_list": [ 00:25:18.103 { 00:25:18.103 "name": "spare", 00:25:18.103 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:18.103 "is_configured": true, 00:25:18.103 "data_offset": 2048, 00:25:18.103 "data_size": 63488 00:25:18.103 }, 00:25:18.103 { 00:25:18.103 "name": null, 00:25:18.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.103 "is_configured": false, 00:25:18.103 "data_offset": 2048, 00:25:18.103 "data_size": 63488 00:25:18.103 }, 00:25:18.103 { 00:25:18.103 "name": "BaseBdev3", 00:25:18.103 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:18.103 "is_configured": true, 00:25:18.103 "data_offset": 2048, 00:25:18.103 "data_size": 63488 00:25:18.103 }, 00:25:18.103 { 00:25:18.103 "name": "BaseBdev4", 00:25:18.103 
"uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:18.103 "is_configured": true, 00:25:18.103 "data_offset": 2048, 00:25:18.103 "data_size": 63488 00:25:18.103 } 00:25:18.103 ] 00:25:18.103 }' 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.103 12:45:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:18.362 [2024-10-01 12:45:00.824549] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:19.299 [2024-10-01 12:45:01.489533] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:19.299 [2024-10-01 12:45:01.594633] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:19.299 [2024-10-01 12:45:01.598363] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.299 12:45:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.558 "name": "raid_bdev1", 00:25:19.558 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:19.558 "strip_size_kb": 0, 00:25:19.558 "state": "online", 00:25:19.558 "raid_level": "raid1", 00:25:19.558 "superblock": true, 00:25:19.558 "num_base_bdevs": 4, 00:25:19.558 "num_base_bdevs_discovered": 3, 00:25:19.558 "num_base_bdevs_operational": 3, 00:25:19.558 "base_bdevs_list": [ 00:25:19.558 { 00:25:19.558 "name": "spare", 00:25:19.558 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:19.558 "is_configured": true, 00:25:19.558 "data_offset": 2048, 00:25:19.558 "data_size": 63488 00:25:19.558 }, 00:25:19.558 { 00:25:19.558 "name": null, 00:25:19.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.558 "is_configured": false, 00:25:19.558 "data_offset": 2048, 00:25:19.558 "data_size": 63488 00:25:19.558 }, 00:25:19.558 { 00:25:19.558 "name": "BaseBdev3", 00:25:19.558 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:19.558 "is_configured": true, 00:25:19.558 "data_offset": 2048, 00:25:19.558 "data_size": 63488 00:25:19.558 }, 00:25:19.558 { 00:25:19.558 "name": "BaseBdev4", 00:25:19.558 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:19.558 "is_configured": true, 00:25:19.558 "data_offset": 2048, 00:25:19.558 "data_size": 63488 00:25:19.558 } 00:25:19.558 ] 00:25:19.558 }' 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@191 -- 
# [[ none == \s\p\a\r\e ]] 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@660 -- # break 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.558 12:45:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.817 "name": "raid_bdev1", 00:25:19.817 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:19.817 "strip_size_kb": 0, 00:25:19.817 "state": "online", 00:25:19.817 "raid_level": "raid1", 00:25:19.817 "superblock": true, 00:25:19.817 "num_base_bdevs": 4, 00:25:19.817 "num_base_bdevs_discovered": 3, 00:25:19.817 "num_base_bdevs_operational": 3, 00:25:19.817 "base_bdevs_list": [ 00:25:19.817 { 00:25:19.817 "name": "spare", 00:25:19.817 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:19.817 "is_configured": true, 00:25:19.817 "data_offset": 2048, 00:25:19.817 "data_size": 63488 00:25:19.817 }, 00:25:19.817 { 00:25:19.817 "name": null, 00:25:19.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.817 "is_configured": false, 00:25:19.817 "data_offset": 2048, 00:25:19.817 "data_size": 63488 00:25:19.817 }, 00:25:19.817 { 00:25:19.817 "name": "BaseBdev3", 00:25:19.817 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:19.817 "is_configured": true, 00:25:19.817 "data_offset": 2048, 00:25:19.817 "data_size": 63488 00:25:19.817 }, 00:25:19.817 { 00:25:19.817 "name": "BaseBdev4", 00:25:19.817 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:19.817 "is_configured": true, 00:25:19.817 "data_offset": 2048, 00:25:19.817 "data_size": 63488 00:25:19.817 } 00:25:19.817 ] 00:25:19.817 }' 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.817 12:45:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.077 12:45:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:25:20.077 "name": "raid_bdev1", 00:25:20.077 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:20.077 "strip_size_kb": 0, 00:25:20.077 "state": "online", 00:25:20.077 "raid_level": "raid1", 00:25:20.077 "superblock": true, 00:25:20.077 "num_base_bdevs": 4, 00:25:20.077 "num_base_bdevs_discovered": 3, 00:25:20.077 "num_base_bdevs_operational": 3, 00:25:20.077 "base_bdevs_list": [ 00:25:20.077 { 00:25:20.077 "name": "spare", 00:25:20.077 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:20.077 "is_configured": true, 00:25:20.077 "data_offset": 2048, 00:25:20.077 "data_size": 63488 00:25:20.077 }, 00:25:20.077 { 00:25:20.077 "name": null, 00:25:20.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.077 "is_configured": false, 00:25:20.077 "data_offset": 2048, 00:25:20.077 "data_size": 63488 00:25:20.077 }, 00:25:20.077 { 00:25:20.077 "name": "BaseBdev3", 00:25:20.077 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:20.077 "is_configured": true, 00:25:20.077 "data_offset": 2048, 00:25:20.077 "data_size": 63488 00:25:20.077 }, 00:25:20.077 { 00:25:20.077 "name": "BaseBdev4", 00:25:20.077 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:20.077 "is_configured": true, 00:25:20.077 "data_offset": 2048, 00:25:20.077 "data_size": 63488 00:25:20.077 } 00:25:20.077 ] 00:25:20.077 }' 00:25:20.077 12:45:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:20.077 12:45:02 -- common/autotest_common.sh@10 -- # set +x 00:25:20.647 12:45:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:20.647 [2024-10-01 12:45:03.149156] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:20.647 [2024-10-01 12:45:03.149201] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.907 00:25:20.907 Latency(us) 00:25:20.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.907 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:20.907 raid_bdev1 : 10.14 103.13 309.40 0.00 0.00 13245.45 302.68 109489.86 00:25:20.907 =================================================================================================================== 00:25:20.907 Total : 103.13 309.40 0.00 0.00 13245.45 302.68 109489.86 00:25:20.907 [2024-10-01 12:45:03.235371] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.907 [2024-10-01 12:45:03.235413] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.907 [2024-10-01 12:45:03.235512] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.907 [2024-10-01 12:45:03.235523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:20.907 0 00:25:20.907 12:45:03 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:20.907 12:45:03 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.166 12:45:03 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:21.166 12:45:03 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:25:21.166 12:45:03 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:21.166 
12:45:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@12 -- # local i 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.166 12:45:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:21.166 /dev/nbd0 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:21.426 12:45:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:21.426 12:45:03 -- common/autotest_common.sh@857 -- # local i 00:25:21.426 12:45:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:21.426 12:45:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:21.426 12:45:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:21.426 12:45:03 -- common/autotest_common.sh@861 -- # break 00:25:21.426 12:45:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:21.426 12:45:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:21.426 12:45:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:21.426 1+0 records in 00:25:21.426 1+0 records out 00:25:21.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310458 s, 13.2 MB/s 00:25:21.426 12:45:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.426 12:45:03 -- common/autotest_common.sh@874 -- # size=4096 00:25:21.426 12:45:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.426 12:45:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:21.426 12:45:03 -- common/autotest_common.sh@877 -- # return 0 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.426 12:45:03 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:21.426 12:45:03 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:21.426 12:45:03 -- bdev/bdev_raid.sh@678 -- # continue 00:25:21.426 12:45:03 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:21.426 12:45:03 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:21.426 12:45:03 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@12 -- # local i 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.426 12:45:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:21.426 /dev/nbd1 00:25:21.686 12:45:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:21.686 12:45:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:21.686 12:45:03 
-- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:21.686 12:45:03 -- common/autotest_common.sh@857 -- # local i 00:25:21.686 12:45:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:21.686 12:45:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:21.686 12:45:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:21.686 12:45:03 -- common/autotest_common.sh@861 -- # break 00:25:21.686 12:45:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:21.686 12:45:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:21.686 12:45:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:21.686 1+0 records in 00:25:21.686 1+0 records out 00:25:21.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613747 s, 6.7 MB/s 00:25:21.686 12:45:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.686 12:45:04 -- common/autotest_common.sh@874 -- # size=4096 00:25:21.686 12:45:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.686 12:45:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:21.686 12:45:04 -- common/autotest_common.sh@877 -- # return 0 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.686 12:45:04 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:21.686 12:45:04 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@51 -- # local i 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:21.686 12:45:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@41 -- # break 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@45 -- # return 0 00:25:21.945 12:45:04 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:21.945 12:45:04 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:25:21.945 12:45:04 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@12 -- # local i 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:21.945 12:45:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.945 12:45:04 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:22.204 /dev/nbd1 00:25:22.204 12:45:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:22.204 12:45:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:22.204 12:45:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:22.204 12:45:04 -- common/autotest_common.sh@857 -- # local i 00:25:22.204 12:45:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:22.204 12:45:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:22.204 12:45:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:22.204 12:45:04 -- common/autotest_common.sh@861 -- # break 00:25:22.204 12:45:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:22.204 12:45:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:22.204 12:45:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:22.204 1+0 records in 00:25:22.204 1+0 records out 00:25:22.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434124 s, 9.4 MB/s 00:25:22.204 12:45:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:22.204 12:45:04 -- common/autotest_common.sh@874 -- # size=4096 00:25:22.204 12:45:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:22.204 12:45:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:22.204 12:45:04 -- common/autotest_common.sh@877 -- # return 0 00:25:22.204 12:45:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:22.204 12:45:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:22.204 12:45:04 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:22.463 12:45:04 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@51 -- # local i 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@41 -- # break 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@45 -- # return 0 00:25:22.463 12:45:04 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@51 -- # local i 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:22.463 12:45:04 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@41 -- # break 00:25:22.721 12:45:05 -- bdev/nbd_common.sh@45 -- # return 0 00:25:22.721 12:45:05 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:22.721 12:45:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:22.721 12:45:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:22.721 12:45:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:22.979 12:45:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:23.247 [2024-10-01 12:45:05.554483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:23.247 [2024-10-01 12:45:05.554771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.248 [2024-10-01 12:45:05.554851] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:23.248 [2024-10-01 12:45:05.554954] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.248 [2024-10-01 12:45:05.557591] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.248 [2024-10-01 12:45:05.557780] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:23.248 [2024-10-01 12:45:05.557963] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:23.248 [2024-10-01 12:45:05.558091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.248 BaseBdev1 00:25:23.248 12:45:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:23.248 12:45:05 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:25:23.248 12:45:05 -- bdev/bdev_raid.sh@696 -- # continue 00:25:23.248 12:45:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:23.248 12:45:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:23.248 12:45:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:23.511 12:45:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:23.511 [2024-10-01 12:45:05.950237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:23.511 [2024-10-01 12:45:05.950455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.511 [2024-10-01 12:45:05.950526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:23.511 [2024-10-01 12:45:05.950619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.511 [2024-10-01 12:45:05.951087] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.511 
[2024-10-01 12:45:05.951274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:23.511 [2024-10-01 12:45:05.951459] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:23.511 [2024-10-01 12:45:05.951538] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:25:23.511 [2024-10-01 12:45:05.951607] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:23.511 [2024-10-01 12:45:05.951650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:25:23.511 [2024-10-01 12:45:05.951747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:23.511 BaseBdev3 00:25:23.511 12:45:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:23.511 12:45:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:23.511 12:45:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:23.770 12:45:06 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:24.028 [2024-10-01 12:45:06.325692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:24.028 [2024-10-01 12:45:06.325888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.028 [2024-10-01 12:45:06.325947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:24.028 [2024-10-01 12:45:06.326036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.028 [2024-10-01 12:45:06.326473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.028 [2024-10-01 12:45:06.326808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:24.028 [2024-10-01 12:45:06.326923] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:24.028 [2024-10-01 12:45:06.326962] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:24.028 BaseBdev4 00:25:24.028 12:45:06 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:24.028 12:45:06 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:24.292 [2024-10-01 12:45:06.701211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:24.292 [2024-10-01 12:45:06.701358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.292 [2024-10-01 12:45:06.701410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:25:24.292 [2024-10-01 12:45:06.701503] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.292 [2024-10-01 12:45:06.701924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.292 [2024-10-01 12:45:06.702059] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:24.292 [2024-10-01 12:45:06.702219] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 
00:25:24.292 [2024-10-01 12:45:06.702329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:24.292 spare 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.292 12:45:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.292 [2024-10-01 12:45:06.802336] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:25:24.292 [2024-10-01 12:45:06.802452] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:24.292 [2024-10-01 12:45:06.802628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:25:24.292 [2024-10-01 12:45:06.803015] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:25:24.292 [2024-10-01 12:45:06.803098] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:25:24.292 [2024-10-01 12:45:06.803277] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:24.550 12:45:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.550 "name": "raid_bdev1", 00:25:24.550 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:24.550 "strip_size_kb": 0, 00:25:24.550 "state": "online", 00:25:24.550 "raid_level": "raid1", 00:25:24.550 "superblock": true, 00:25:24.550 "num_base_bdevs": 4, 00:25:24.550 "num_base_bdevs_discovered": 3, 00:25:24.550 "num_base_bdevs_operational": 3, 00:25:24.550 "base_bdevs_list": [ 00:25:24.550 { 00:25:24.550 "name": "spare", 00:25:24.550 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:24.550 "is_configured": true, 00:25:24.550 "data_offset": 2048, 00:25:24.550 "data_size": 63488 00:25:24.550 }, 00:25:24.550 { 00:25:24.550 "name": null, 00:25:24.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.550 "is_configured": false, 00:25:24.550 "data_offset": 2048, 00:25:24.550 "data_size": 63488 00:25:24.550 }, 00:25:24.551 { 00:25:24.551 "name": "BaseBdev3", 00:25:24.551 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:24.551 "is_configured": true, 00:25:24.551 "data_offset": 2048, 00:25:24.551 "data_size": 63488 00:25:24.551 }, 00:25:24.551 { 00:25:24.551 "name": "BaseBdev4", 00:25:24.551 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:24.551 "is_configured": true, 00:25:24.551 "data_offset": 2048, 00:25:24.551 "data_size": 63488 00:25:24.551 } 00:25:24.551 ] 00:25:24.551 }' 00:25:24.551 12:45:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.551 12:45:06 -- common/autotest_common.sh@10 -- # set +x 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:25.119 "name": "raid_bdev1", 00:25:25.119 "uuid": "04f114d9-46f0-47c5-b7b2-86b3b2b5dfd0", 00:25:25.119 "strip_size_kb": 0, 00:25:25.119 "state": "online", 00:25:25.119 "raid_level": "raid1", 00:25:25.119 "superblock": true, 00:25:25.119 "num_base_bdevs": 4, 00:25:25.119 "num_base_bdevs_discovered": 3, 00:25:25.119 "num_base_bdevs_operational": 3, 00:25:25.119 "base_bdevs_list": [ 00:25:25.119 { 00:25:25.119 "name": "spare", 00:25:25.119 "uuid": "30e63a72-cdf7-5483-a0d2-f36270f25e8e", 00:25:25.119 "is_configured": true, 00:25:25.119 "data_offset": 2048, 00:25:25.119 "data_size": 63488 00:25:25.119 }, 00:25:25.119 { 00:25:25.119 "name": null, 00:25:25.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.119 "is_configured": false, 00:25:25.119 "data_offset": 2048, 00:25:25.119 "data_size": 63488 00:25:25.119 }, 00:25:25.119 { 00:25:25.119 "name": "BaseBdev3", 00:25:25.119 "uuid": "2dbb0f88-4e10-523f-a5a0-f22b5a0010b6", 00:25:25.119 "is_configured": true, 00:25:25.119 "data_offset": 2048, 00:25:25.119 "data_size": 63488 00:25:25.119 }, 00:25:25.119 { 00:25:25.119 "name": "BaseBdev4", 00:25:25.119 "uuid": "1d14052b-c341-514c-8dba-13c7b19de216", 00:25:25.119 "is_configured": true, 00:25:25.119 "data_offset": 2048, 00:25:25.119 "data_size": 63488 00:25:25.119 } 00:25:25.119 ] 00:25:25.119 }' 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:25.119 12:45:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:25.378 12:45:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:25.378 12:45:07 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.378 12:45:07 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:25.378 12:45:07 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.378 12:45:07 -- bdev/bdev_raid.sh@709 -- # killprocess 126620 00:25:25.378 12:45:07 -- common/autotest_common.sh@926 -- # '[' -z 126620 ']' 00:25:25.378 12:45:07 -- common/autotest_common.sh@930 -- # kill -0 126620 00:25:25.378 12:45:07 -- common/autotest_common.sh@931 -- # uname 00:25:25.638 12:45:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:25.638 12:45:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126620 00:25:25.638 killing process with pid 126620 00:25:25.638 Received shutdown signal, test time was about 14.877817 seconds 00:25:25.638 00:25:25.638 Latency(us) 00:25:25.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.638 =================================================================================================================== 00:25:25.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.638 12:45:07 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:25:25.639 12:45:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:25.639 12:45:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126620' 00:25:25.639 12:45:07 -- common/autotest_common.sh@945 -- # kill 126620 00:25:25.639 [2024-10-01 12:45:07.940912] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.639 12:45:07 -- common/autotest_common.sh@950 -- # wait 126620 00:25:25.639 [2024-10-01 12:45:07.940967] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.639 [2024-10-01 12:45:07.941026] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.639 [2024-10-01 12:45:07.941034] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:25:25.898 [2024-10-01 12:45:08.379485] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:27.805 ************************************ 00:25:27.805 END TEST raid_rebuild_test_sb_io 00:25:27.805 ************************************ 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:27.805 00:25:27.805 real 0m21.955s 00:25:27.805 user 0m32.842s 00:25:27.805 sys 0m3.616s 00:25:27.805 12:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.805 12:45:09 -- common/autotest_common.sh@10 -- # set +x 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:25:27.805 12:45:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:27.805 12:45:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.805 12:45:09 -- common/autotest_common.sh@10 -- # set +x 00:25:27.805 ************************************ 00:25:27.805 START TEST raid5f_state_function_test 00:25:27.805 ************************************ 00:25:27.805 12:45:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:27.805 
12:45:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=127224 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:27.805 Process raid pid: 127224 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127224' 00:25:27.805 12:45:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127224 /var/tmp/spdk-raid.sock 00:25:27.805 12:45:09 -- common/autotest_common.sh@819 -- # '[' -z 127224 ']' 00:25:27.805 12:45:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:27.805 12:45:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:27.805 12:45:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:27.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:27.805 12:45:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:27.805 12:45:09 -- common/autotest_common.sh@10 -- # set +x 00:25:27.805 [2024-10-01 12:45:10.042474] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:27.805 [2024-10-01 12:45:10.042800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.805 [2024-10-01 12:45:10.211486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.064 [2024-10-01 12:45:10.395829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.064 [2024-10-01 12:45:10.578499] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:28.631 12:45:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:28.631 12:45:10 -- common/autotest_common.sh@852 -- # return 0 00:25:28.631 12:45:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:28.631 [2024-10-01 12:45:11.043724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.631 [2024-10-01 12:45:11.044048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.631 [2024-10-01 12:45:11.044135] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:28.631 [2024-10-01 12:45:11.044193] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:28.631 [2024-10-01 12:45:11.044219] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:28.631 [2024-10-01 12:45:11.044345] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 3 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.631 12:45:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.890 12:45:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:28.890 "name": "Existed_Raid", 00:25:28.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.890 "strip_size_kb": 64, 00:25:28.890 "state": "configuring", 00:25:28.890 "raid_level": "raid5f", 00:25:28.890 "superblock": false, 00:25:28.890 "num_base_bdevs": 3, 00:25:28.890 "num_base_bdevs_discovered": 0, 00:25:28.890 "num_base_bdevs_operational": 3, 00:25:28.890 "base_bdevs_list": [ 00:25:28.890 { 00:25:28.890 "name": "BaseBdev1", 00:25:28.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.890 "is_configured": false, 00:25:28.890 "data_offset": 0, 00:25:28.890 "data_size": 0 00:25:28.890 }, 00:25:28.890 { 00:25:28.890 "name": "BaseBdev2", 00:25:28.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.890 "is_configured": false, 00:25:28.890 "data_offset": 0, 00:25:28.890 "data_size": 0 00:25:28.890 }, 00:25:28.890 { 00:25:28.890 "name": "BaseBdev3", 00:25:28.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.890 "is_configured": false, 00:25:28.890 "data_offset": 0, 00:25:28.890 "data_size": 0 00:25:28.890 } 00:25:28.890 ] 00:25:28.890 }' 00:25:28.890 12:45:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:28.890 12:45:11 -- common/autotest_common.sh@10 -- # set +x 00:25:29.459 12:45:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:29.459 [2024-10-01 12:45:11.942330] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.459 [2024-10-01 12:45:11.942504] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:25:29.459 12:45:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:29.720 [2024-10-01 12:45:12.138068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:29.720 [2024-10-01 12:45:12.138240] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:29.720 [2024-10-01 12:45:12.138355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.720 [2024-10-01 12:45:12.138424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.720 [2024-10-01 12:45:12.138451] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:25:29.720 [2024-10-01 12:45:12.138495] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:29.720 12:45:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:29.979 [2024-10-01 12:45:12.358850] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.979 BaseBdev1 00:25:29.979 12:45:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:29.979 12:45:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:29.979 12:45:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:29.979 12:45:12 -- common/autotest_common.sh@889 -- # local i 00:25:29.979 12:45:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:29.979 12:45:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:29.979 12:45:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:30.238 12:45:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:30.238 [ 00:25:30.238 { 00:25:30.238 "name": "BaseBdev1", 00:25:30.238 "aliases": [ 00:25:30.238 "e06743ad-af85-4a46-a435-d3cf102c70d6" 00:25:30.238 ], 00:25:30.238 "product_name": "Malloc disk", 00:25:30.238 "block_size": 512, 00:25:30.238 "num_blocks": 65536, 00:25:30.238 "uuid": "e06743ad-af85-4a46-a435-d3cf102c70d6", 00:25:30.238 "assigned_rate_limits": { 00:25:30.238 "rw_ios_per_sec": 0, 00:25:30.238 "rw_mbytes_per_sec": 0, 00:25:30.238 "r_mbytes_per_sec": 0, 00:25:30.238 "w_mbytes_per_sec": 0 00:25:30.238 }, 00:25:30.238 "claimed": true, 00:25:30.238 "claim_type": "exclusive_write", 00:25:30.238 "zoned": false, 00:25:30.238 "supported_io_types": { 00:25:30.238 "read": true, 00:25:30.238 "write": true, 00:25:30.238 "unmap": true, 00:25:30.238 "write_zeroes": true, 00:25:30.238 "flush": true, 00:25:30.238 "reset": true, 00:25:30.238 "compare": false, 00:25:30.238 "compare_and_write": false, 00:25:30.238 "abort": true, 00:25:30.238 "nvme_admin": false, 00:25:30.238 "nvme_io": false 00:25:30.238 }, 00:25:30.238 "memory_domains": [ 00:25:30.238 { 00:25:30.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.238 "dma_device_type": 2 00:25:30.238 } 00:25:30.238 ], 00:25:30.238 "driver_specific": {} 00:25:30.238 } 00:25:30.238 ] 00:25:30.238 12:45:12 -- common/autotest_common.sh@895 -- # return 0 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.238 12:45:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.238 12:45:12 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.498 12:45:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:30.498 "name": "Existed_Raid", 00:25:30.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.498 "strip_size_kb": 64, 00:25:30.498 "state": "configuring", 00:25:30.498 "raid_level": "raid5f", 00:25:30.498 "superblock": false, 00:25:30.498 "num_base_bdevs": 3, 00:25:30.498 "num_base_bdevs_discovered": 1, 00:25:30.498 "num_base_bdevs_operational": 3, 00:25:30.498 "base_bdevs_list": [ 00:25:30.498 { 00:25:30.498 "name": "BaseBdev1", 00:25:30.498 "uuid": "e06743ad-af85-4a46-a435-d3cf102c70d6", 00:25:30.498 "is_configured": true, 00:25:30.498 "data_offset": 0, 00:25:30.498 "data_size": 65536 00:25:30.498 }, 00:25:30.498 { 00:25:30.498 "name": "BaseBdev2", 00:25:30.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.498 "is_configured": false, 00:25:30.498 "data_offset": 0, 00:25:30.498 "data_size": 0 00:25:30.498 }, 00:25:30.498 { 00:25:30.498 "name": "BaseBdev3", 00:25:30.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.498 "is_configured": false, 00:25:30.498 "data_offset": 0, 00:25:30.498 "data_size": 0 00:25:30.498 } 00:25:30.498 ] 00:25:30.498 }' 00:25:30.498 12:45:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:30.498 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:25:31.066 12:45:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:31.326 [2024-10-01 12:45:13.625245] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:31.326 [2024-10-01 12:45:13.625390] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:31.326 [2024-10-01 12:45:13.789067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.326 [2024-10-01 12:45:13.791265] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:31.326 [2024-10-01 12:45:13.791425] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:31.326 [2024-10-01 12:45:13.791505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:31.326 [2024-10-01 12:45:13.791562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:31.326 
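[annotation] The pattern running here is the test's central check: bdev_raid_get_bdevs output is filtered through jq for the Existed_Raid entry and verify_raid_bdev_state compares the captured JSON against the expected values. A condensed sketch of that check, assuming the same socket and jq filter as the log (the real helper in bdev_raid.sh asserts additional fields):

    info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    # assert the fields the verification above is checking
    [[ $(jq -r '.state' <<< "$info") == configuring ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3 ]]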
12:45:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.326 12:45:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.585 12:45:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.585 "name": "Existed_Raid", 00:25:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.585 "strip_size_kb": 64, 00:25:31.585 "state": "configuring", 00:25:31.585 "raid_level": "raid5f", 00:25:31.585 "superblock": false, 00:25:31.585 "num_base_bdevs": 3, 00:25:31.585 "num_base_bdevs_discovered": 1, 00:25:31.585 "num_base_bdevs_operational": 3, 00:25:31.585 "base_bdevs_list": [ 00:25:31.585 { 00:25:31.585 "name": "BaseBdev1", 00:25:31.585 "uuid": "e06743ad-af85-4a46-a435-d3cf102c70d6", 00:25:31.585 "is_configured": true, 00:25:31.585 "data_offset": 0, 00:25:31.585 "data_size": 65536 00:25:31.585 }, 00:25:31.585 { 00:25:31.585 "name": "BaseBdev2", 00:25:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.585 "is_configured": false, 00:25:31.585 "data_offset": 0, 00:25:31.585 "data_size": 0 00:25:31.585 }, 00:25:31.585 { 00:25:31.585 "name": "BaseBdev3", 00:25:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.585 "is_configured": false, 00:25:31.585 "data_offset": 0, 00:25:31.585 "data_size": 0 00:25:31.585 } 00:25:31.585 ] 00:25:31.585 }' 00:25:31.585 12:45:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.585 12:45:13 -- common/autotest_common.sh@10 -- # set +x 00:25:32.155 12:45:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:32.414 [2024-10-01 12:45:14.700245] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:32.414 BaseBdev2 00:25:32.414 12:45:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:32.414 12:45:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:25:32.414 12:45:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:32.414 12:45:14 -- common/autotest_common.sh@889 -- # local i 00:25:32.414 12:45:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:32.414 12:45:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:32.414 12:45:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:32.414 12:45:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:32.674 [ 00:25:32.674 { 00:25:32.674 "name": "BaseBdev2", 00:25:32.674 "aliases": [ 00:25:32.674 "68bac3a2-de90-4169-a272-3ae1644a43aa" 00:25:32.674 ], 00:25:32.674 "product_name": "Malloc disk", 00:25:32.674 "block_size": 512, 00:25:32.674 "num_blocks": 65536, 00:25:32.674 "uuid": "68bac3a2-de90-4169-a272-3ae1644a43aa", 00:25:32.674 "assigned_rate_limits": { 00:25:32.674 "rw_ios_per_sec": 0, 00:25:32.674 "rw_mbytes_per_sec": 0, 00:25:32.674 "r_mbytes_per_sec": 0, 00:25:32.674 "w_mbytes_per_sec": 0 00:25:32.674 }, 00:25:32.674 "claimed": true, 00:25:32.674 "claim_type": "exclusive_write", 00:25:32.674 "zoned": false, 00:25:32.674 "supported_io_types": { 00:25:32.674 "read": true, 00:25:32.674 "write": true, 00:25:32.674 "unmap": true, 00:25:32.674 "write_zeroes": true, 
00:25:32.674 "flush": true, 00:25:32.674 "reset": true, 00:25:32.674 "compare": false, 00:25:32.674 "compare_and_write": false, 00:25:32.674 "abort": true, 00:25:32.674 "nvme_admin": false, 00:25:32.674 "nvme_io": false 00:25:32.674 }, 00:25:32.674 "memory_domains": [ 00:25:32.674 { 00:25:32.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.674 "dma_device_type": 2 00:25:32.674 } 00:25:32.674 ], 00:25:32.674 "driver_specific": {} 00:25:32.674 } 00:25:32.674 ] 00:25:32.674 12:45:15 -- common/autotest_common.sh@895 -- # return 0 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.674 12:45:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.933 12:45:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:32.933 "name": "Existed_Raid", 00:25:32.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.933 "strip_size_kb": 64, 00:25:32.933 "state": "configuring", 00:25:32.933 "raid_level": "raid5f", 00:25:32.933 "superblock": false, 00:25:32.933 "num_base_bdevs": 3, 00:25:32.933 "num_base_bdevs_discovered": 2, 00:25:32.933 "num_base_bdevs_operational": 3, 00:25:32.933 "base_bdevs_list": [ 00:25:32.933 { 00:25:32.933 "name": "BaseBdev1", 00:25:32.933 "uuid": "e06743ad-af85-4a46-a435-d3cf102c70d6", 00:25:32.933 "is_configured": true, 00:25:32.933 "data_offset": 0, 00:25:32.933 "data_size": 65536 00:25:32.933 }, 00:25:32.933 { 00:25:32.933 "name": "BaseBdev2", 00:25:32.933 "uuid": "68bac3a2-de90-4169-a272-3ae1644a43aa", 00:25:32.933 "is_configured": true, 00:25:32.933 "data_offset": 0, 00:25:32.933 "data_size": 65536 00:25:32.933 }, 00:25:32.933 { 00:25:32.933 "name": "BaseBdev3", 00:25:32.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.933 "is_configured": false, 00:25:32.933 "data_offset": 0, 00:25:32.933 "data_size": 0 00:25:32.933 } 00:25:32.933 ] 00:25:32.933 }' 00:25:32.933 12:45:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:32.933 12:45:15 -- common/autotest_common.sh@10 -- # set +x 00:25:33.501 12:45:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:33.501 [2024-10-01 12:45:15.997249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:33.501 [2024-10-01 12:45:15.997523] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:25:33.501 [2024-10-01 12:45:15.997567] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 
131072, blocklen 512 00:25:33.501 [2024-10-01 12:45:15.997775] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:25:33.501 [2024-10-01 12:45:16.002081] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:25:33.501 [2024-10-01 12:45:16.002202] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:25:33.501 [2024-10-01 12:45:16.002640] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.501 BaseBdev3 00:25:33.501 12:45:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:33.501 12:45:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:25:33.501 12:45:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:33.501 12:45:16 -- common/autotest_common.sh@889 -- # local i 00:25:33.501 12:45:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:33.501 12:45:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:33.501 12:45:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:33.760 12:45:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:34.020 [ 00:25:34.020 { 00:25:34.020 "name": "BaseBdev3", 00:25:34.020 "aliases": [ 00:25:34.020 "72f382b3-6ead-4b29-aa42-b0bfa8c7c51e" 00:25:34.020 ], 00:25:34.020 "product_name": "Malloc disk", 00:25:34.020 "block_size": 512, 00:25:34.020 "num_blocks": 65536, 00:25:34.020 "uuid": "72f382b3-6ead-4b29-aa42-b0bfa8c7c51e", 00:25:34.020 "assigned_rate_limits": { 00:25:34.020 "rw_ios_per_sec": 0, 00:25:34.020 "rw_mbytes_per_sec": 0, 00:25:34.020 "r_mbytes_per_sec": 0, 00:25:34.020 "w_mbytes_per_sec": 0 00:25:34.020 }, 00:25:34.020 "claimed": true, 00:25:34.020 "claim_type": "exclusive_write", 00:25:34.020 "zoned": false, 00:25:34.020 "supported_io_types": { 00:25:34.020 "read": true, 00:25:34.020 "write": true, 00:25:34.020 "unmap": true, 00:25:34.020 "write_zeroes": true, 00:25:34.020 "flush": true, 00:25:34.020 "reset": true, 00:25:34.020 "compare": false, 00:25:34.020 "compare_and_write": false, 00:25:34.020 "abort": true, 00:25:34.020 "nvme_admin": false, 00:25:34.020 "nvme_io": false 00:25:34.020 }, 00:25:34.020 "memory_domains": [ 00:25:34.020 { 00:25:34.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.020 "dma_device_type": 2 00:25:34.020 } 00:25:34.020 ], 00:25:34.020 "driver_specific": {} 00:25:34.020 } 00:25:34.020 ] 00:25:34.020 12:45:16 -- common/autotest_common.sh@895 -- # return 0 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
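[annotation] What the run just exercised is the raid bdev lifecycle: the array is created while its members are still missing and sits in "configuring"; each bdev_malloc_create is claimed by the raid module as it appears, and once the third member exists the array configures itself and goes online (the "io device register ... blockcnt 131072, blocklen 512" lines above). Condensed to the RPC calls taken verbatim from the xtrace:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid     # state: configuring
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1  # claimed
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2  # claimed
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3  # claimed -> online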
00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.020 12:45:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:34.280 12:45:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:34.280 "name": "Existed_Raid", 00:25:34.280 "uuid": "6d8ae3f7-1cf1-4c94-9637-e6405492d4f5", 00:25:34.280 "strip_size_kb": 64, 00:25:34.280 "state": "online", 00:25:34.280 "raid_level": "raid5f", 00:25:34.280 "superblock": false, 00:25:34.280 "num_base_bdevs": 3, 00:25:34.280 "num_base_bdevs_discovered": 3, 00:25:34.280 "num_base_bdevs_operational": 3, 00:25:34.280 "base_bdevs_list": [ 00:25:34.280 { 00:25:34.280 "name": "BaseBdev1", 00:25:34.280 "uuid": "e06743ad-af85-4a46-a435-d3cf102c70d6", 00:25:34.280 "is_configured": true, 00:25:34.280 "data_offset": 0, 00:25:34.280 "data_size": 65536 00:25:34.280 }, 00:25:34.280 { 00:25:34.280 "name": "BaseBdev2", 00:25:34.280 "uuid": "68bac3a2-de90-4169-a272-3ae1644a43aa", 00:25:34.280 "is_configured": true, 00:25:34.280 "data_offset": 0, 00:25:34.280 "data_size": 65536 00:25:34.280 }, 00:25:34.280 { 00:25:34.280 "name": "BaseBdev3", 00:25:34.280 "uuid": "72f382b3-6ead-4b29-aa42-b0bfa8c7c51e", 00:25:34.280 "is_configured": true, 00:25:34.280 "data_offset": 0, 00:25:34.280 "data_size": 65536 00:25:34.280 } 00:25:34.280 ] 00:25:34.280 }' 00:25:34.280 12:45:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:34.280 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:25:34.539 12:45:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:34.798 [2024-10-01 12:45:17.223541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:35.057 "name": "Existed_Raid", 00:25:35.057 "uuid": "6d8ae3f7-1cf1-4c94-9637-e6405492d4f5", 00:25:35.057 "strip_size_kb": 64, 00:25:35.057 "state": "online", 00:25:35.057 "raid_level": "raid5f", 00:25:35.057 
"superblock": false, 00:25:35.057 "num_base_bdevs": 3, 00:25:35.057 "num_base_bdevs_discovered": 2, 00:25:35.057 "num_base_bdevs_operational": 2, 00:25:35.057 "base_bdevs_list": [ 00:25:35.057 { 00:25:35.057 "name": null, 00:25:35.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.057 "is_configured": false, 00:25:35.057 "data_offset": 0, 00:25:35.057 "data_size": 65536 00:25:35.057 }, 00:25:35.057 { 00:25:35.057 "name": "BaseBdev2", 00:25:35.057 "uuid": "68bac3a2-de90-4169-a272-3ae1644a43aa", 00:25:35.057 "is_configured": true, 00:25:35.057 "data_offset": 0, 00:25:35.057 "data_size": 65536 00:25:35.057 }, 00:25:35.057 { 00:25:35.057 "name": "BaseBdev3", 00:25:35.057 "uuid": "72f382b3-6ead-4b29-aa42-b0bfa8c7c51e", 00:25:35.057 "is_configured": true, 00:25:35.057 "data_offset": 0, 00:25:35.057 "data_size": 65536 00:25:35.057 } 00:25:35.057 ] 00:25:35.057 }' 00:25:35.057 12:45:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:35.057 12:45:17 -- common/autotest_common.sh@10 -- # set +x 00:25:35.669 12:45:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:35.669 12:45:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:35.669 12:45:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:35.669 12:45:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.927 12:45:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:35.927 12:45:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:35.927 12:45:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:35.927 [2024-10-01 12:45:18.413912] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:35.927 [2024-10-01 12:45:18.414058] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.927 [2024-10-01 12:45:18.414274] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.186 12:45:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:36.186 12:45:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:36.186 12:45:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.186 12:45:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:36.186 12:45:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:36.186 12:45:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:36.186 12:45:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:36.446 [2024-10-01 12:45:18.885232] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:36.446 [2024-10-01 12:45:18.885432] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:25:36.710 12:45:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:36.710 12:45:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:36.710 12:45:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:36.710 12:45:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.710 12:45:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:36.710 12:45:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:36.710 12:45:19 -- bdev/bdev_raid.sh@287 -- # killprocess 127224 
00:25:36.710 12:45:19 -- common/autotest_common.sh@926 -- # '[' -z 127224 ']' 00:25:36.710 12:45:19 -- common/autotest_common.sh@930 -- # kill -0 127224 00:25:36.710 12:45:19 -- common/autotest_common.sh@931 -- # uname 00:25:36.710 12:45:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:36.710 12:45:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127224 00:25:36.710 12:45:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:36.710 12:45:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:36.710 12:45:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127224' 00:25:36.710 killing process with pid 127224 00:25:36.710 12:45:19 -- common/autotest_common.sh@945 -- # kill 127224 00:25:36.710 [2024-10-01 12:45:19.237589] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:36.710 [2024-10-01 12:45:19.237873] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:36.710 12:45:19 -- common/autotest_common.sh@950 -- # wait 127224 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:38.088 00:25:38.088 real 0m10.483s 00:25:38.088 user 0m17.375s 00:25:38.088 sys 0m1.826s 00:25:38.088 12:45:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.088 ************************************ 00:25:38.088 END TEST raid5f_state_function_test 00:25:38.088 ************************************ 00:25:38.088 12:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:25:38.088 12:45:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:38.088 12:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:38.088 12:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:38.088 ************************************ 00:25:38.088 START TEST raid5f_state_function_test_sb 00:25:38.088 ************************************ 00:25:38.088 12:45:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@209 -- # local 
strip_size_create_arg 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=127587 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127587' 00:25:38.088 Process raid pid: 127587 00:25:38.088 12:45:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127587 /var/tmp/spdk-raid.sock 00:25:38.088 12:45:20 -- common/autotest_common.sh@819 -- # '[' -z 127587 ']' 00:25:38.088 12:45:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:38.088 12:45:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:38.088 12:45:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:38.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:38.088 12:45:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:38.088 12:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:38.088 [2024-10-01 12:45:20.614594] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:38.088 [2024-10-01 12:45:20.614831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.346 [2024-10-01 12:45:20.784305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.605 [2024-10-01 12:45:20.970896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.864 [2024-10-01 12:45:21.152844] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:39.123 12:45:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:39.123 12:45:21 -- common/autotest_common.sh@852 -- # return 0 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:39.123 [2024-10-01 12:45:21.594060] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:39.123 [2024-10-01 12:45:21.594386] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:39.123 [2024-10-01 12:45:21.594491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:39.123 [2024-10-01 12:45:21.594550] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:39.123 [2024-10-01 12:45:21.594575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:39.123 [2024-10-01 12:45:21.594637] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 
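[annotation] This _sb run is the same state-function test with superblock=true, so superblock_create_arg resolves to -s and every create call in this run writes an on-disk superblock to each member (flags verbatim from the xtrace above):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

The cost of the superblock shows up later in the JSON as data_offset 2048 and data_size 63488 per member, where the non-superblock run reported 0 and 65536.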
00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.123 12:45:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.383 12:45:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:39.383 "name": "Existed_Raid", 00:25:39.383 "uuid": "0d8fb593-22d0-4b0a-b373-1aeeceb24050", 00:25:39.383 "strip_size_kb": 64, 00:25:39.383 "state": "configuring", 00:25:39.383 "raid_level": "raid5f", 00:25:39.383 "superblock": true, 00:25:39.383 "num_base_bdevs": 3, 00:25:39.383 "num_base_bdevs_discovered": 0, 00:25:39.383 "num_base_bdevs_operational": 3, 00:25:39.383 "base_bdevs_list": [ 00:25:39.383 { 00:25:39.383 "name": "BaseBdev1", 00:25:39.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.383 "is_configured": false, 00:25:39.383 "data_offset": 0, 00:25:39.383 "data_size": 0 00:25:39.383 }, 00:25:39.383 { 00:25:39.383 "name": "BaseBdev2", 00:25:39.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.383 "is_configured": false, 00:25:39.383 "data_offset": 0, 00:25:39.383 "data_size": 0 00:25:39.383 }, 00:25:39.383 { 00:25:39.383 "name": "BaseBdev3", 00:25:39.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.383 "is_configured": false, 00:25:39.383 "data_offset": 0, 00:25:39.383 "data_size": 0 00:25:39.383 } 00:25:39.383 ] 00:25:39.383 }' 00:25:39.383 12:45:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:39.383 12:45:21 -- common/autotest_common.sh@10 -- # set +x 00:25:39.950 12:45:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:39.950 [2024-10-01 12:45:22.464634] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:39.950 [2024-10-01 12:45:22.464816] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:25:39.950 12:45:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:40.208 [2024-10-01 12:45:22.616490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.208 [2024-10-01 12:45:22.616670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.208 [2024-10-01 12:45:22.616773] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.208 [2024-10-01 12:45:22.616835] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.208 [2024-10-01 12:45:22.616861] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:40.208 
[2024-10-01 12:45:22.616908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:40.208 12:45:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:40.466 [2024-10-01 12:45:22.825240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.467 BaseBdev1 00:25:40.467 12:45:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:40.467 12:45:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:40.467 12:45:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:40.467 12:45:22 -- common/autotest_common.sh@889 -- # local i 00:25:40.467 12:45:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:40.467 12:45:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:40.467 12:45:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.725 12:45:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:40.725 [ 00:25:40.725 { 00:25:40.725 "name": "BaseBdev1", 00:25:40.725 "aliases": [ 00:25:40.725 "be479915-da64-4712-a2fe-141a38fa65fe" 00:25:40.725 ], 00:25:40.725 "product_name": "Malloc disk", 00:25:40.725 "block_size": 512, 00:25:40.725 "num_blocks": 65536, 00:25:40.725 "uuid": "be479915-da64-4712-a2fe-141a38fa65fe", 00:25:40.725 "assigned_rate_limits": { 00:25:40.725 "rw_ios_per_sec": 0, 00:25:40.725 "rw_mbytes_per_sec": 0, 00:25:40.725 "r_mbytes_per_sec": 0, 00:25:40.725 "w_mbytes_per_sec": 0 00:25:40.725 }, 00:25:40.725 "claimed": true, 00:25:40.725 "claim_type": "exclusive_write", 00:25:40.725 "zoned": false, 00:25:40.725 "supported_io_types": { 00:25:40.725 "read": true, 00:25:40.725 "write": true, 00:25:40.725 "unmap": true, 00:25:40.725 "write_zeroes": true, 00:25:40.725 "flush": true, 00:25:40.725 "reset": true, 00:25:40.725 "compare": false, 00:25:40.725 "compare_and_write": false, 00:25:40.725 "abort": true, 00:25:40.725 "nvme_admin": false, 00:25:40.725 "nvme_io": false 00:25:40.725 }, 00:25:40.725 "memory_domains": [ 00:25:40.725 { 00:25:40.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.725 "dma_device_type": 2 00:25:40.725 } 00:25:40.725 ], 00:25:40.725 "driver_specific": {} 00:25:40.725 } 00:25:40.725 ] 00:25:40.983 12:45:23 -- common/autotest_common.sh@895 -- # return 0 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:40.983 "name": "Existed_Raid", 00:25:40.983 "uuid": "c451ab86-8ff2-4c98-aefb-9cf14b96e060", 00:25:40.983 "strip_size_kb": 64, 00:25:40.983 "state": "configuring", 00:25:40.983 "raid_level": "raid5f", 00:25:40.983 "superblock": true, 00:25:40.983 "num_base_bdevs": 3, 00:25:40.983 "num_base_bdevs_discovered": 1, 00:25:40.983 "num_base_bdevs_operational": 3, 00:25:40.983 "base_bdevs_list": [ 00:25:40.983 { 00:25:40.983 "name": "BaseBdev1", 00:25:40.983 "uuid": "be479915-da64-4712-a2fe-141a38fa65fe", 00:25:40.983 "is_configured": true, 00:25:40.983 "data_offset": 2048, 00:25:40.983 "data_size": 63488 00:25:40.983 }, 00:25:40.983 { 00:25:40.983 "name": "BaseBdev2", 00:25:40.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.983 "is_configured": false, 00:25:40.983 "data_offset": 0, 00:25:40.983 "data_size": 0 00:25:40.983 }, 00:25:40.983 { 00:25:40.983 "name": "BaseBdev3", 00:25:40.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.983 "is_configured": false, 00:25:40.983 "data_offset": 0, 00:25:40.983 "data_size": 0 00:25:40.983 } 00:25:40.983 ] 00:25:40.983 }' 00:25:40.983 12:45:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:40.983 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:25:41.550 12:45:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:41.808 [2024-10-01 12:45:24.135433] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:41.808 [2024-10-01 12:45:24.135615] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:41.808 12:45:24 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:25:41.808 12:45:24 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:42.067 12:45:24 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:42.326 BaseBdev1 00:25:42.326 12:45:24 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:25:42.326 12:45:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:42.326 12:45:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:42.326 12:45:24 -- common/autotest_common.sh@889 -- # local i 00:25:42.326 12:45:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:42.326 12:45:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:42.326 12:45:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:42.326 12:45:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:42.589 [ 00:25:42.589 { 00:25:42.589 "name": "BaseBdev1", 00:25:42.589 "aliases": [ 00:25:42.589 "40b23ea2-80bb-4e46-978a-7b7e6ebcfffc" 00:25:42.589 ], 00:25:42.589 "product_name": "Malloc disk", 00:25:42.589 "block_size": 512, 00:25:42.589 "num_blocks": 65536, 00:25:42.589 "uuid": "40b23ea2-80bb-4e46-978a-7b7e6ebcfffc", 00:25:42.589 "assigned_rate_limits": { 00:25:42.589 "rw_ios_per_sec": 0, 00:25:42.589 "rw_mbytes_per_sec": 0, 00:25:42.589 "r_mbytes_per_sec": 0, 00:25:42.589 "w_mbytes_per_sec": 0 00:25:42.589 }, 00:25:42.589 "claimed": false, 00:25:42.589 "zoned": false, 00:25:42.589 
"supported_io_types": { 00:25:42.589 "read": true, 00:25:42.589 "write": true, 00:25:42.589 "unmap": true, 00:25:42.589 "write_zeroes": true, 00:25:42.589 "flush": true, 00:25:42.589 "reset": true, 00:25:42.589 "compare": false, 00:25:42.589 "compare_and_write": false, 00:25:42.589 "abort": true, 00:25:42.590 "nvme_admin": false, 00:25:42.590 "nvme_io": false 00:25:42.590 }, 00:25:42.590 "memory_domains": [ 00:25:42.590 { 00:25:42.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.590 "dma_device_type": 2 00:25:42.590 } 00:25:42.590 ], 00:25:42.590 "driver_specific": {} 00:25:42.590 } 00:25:42.590 ] 00:25:42.590 12:45:24 -- common/autotest_common.sh@895 -- # return 0 00:25:42.590 12:45:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:42.859 [2024-10-01 12:45:25.165560] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:42.859 [2024-10-01 12:45:25.167985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:42.859 [2024-10-01 12:45:25.168163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:42.859 [2024-10-01 12:45:25.168248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:42.859 [2024-10-01 12:45:25.168308] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:42.859 "name": "Existed_Raid", 00:25:42.859 "uuid": "d3b4a953-4948-42dd-977a-20866467cd6d", 00:25:42.859 "strip_size_kb": 64, 00:25:42.859 "state": "configuring", 00:25:42.859 "raid_level": "raid5f", 00:25:42.859 "superblock": true, 00:25:42.859 "num_base_bdevs": 3, 00:25:42.859 "num_base_bdevs_discovered": 1, 00:25:42.859 "num_base_bdevs_operational": 3, 00:25:42.859 "base_bdevs_list": [ 00:25:42.859 { 00:25:42.859 "name": "BaseBdev1", 00:25:42.859 "uuid": "40b23ea2-80bb-4e46-978a-7b7e6ebcfffc", 00:25:42.859 "is_configured": true, 00:25:42.859 "data_offset": 2048, 00:25:42.859 "data_size": 63488 00:25:42.859 }, 00:25:42.859 { 00:25:42.859 "name": "BaseBdev2", 00:25:42.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.859 
"is_configured": false, 00:25:42.859 "data_offset": 0, 00:25:42.859 "data_size": 0 00:25:42.859 }, 00:25:42.859 { 00:25:42.859 "name": "BaseBdev3", 00:25:42.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.859 "is_configured": false, 00:25:42.859 "data_offset": 0, 00:25:42.859 "data_size": 0 00:25:42.859 } 00:25:42.859 ] 00:25:42.859 }' 00:25:42.859 12:45:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:42.859 12:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:43.428 12:45:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:43.686 [2024-10-01 12:45:26.111571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:43.686 BaseBdev2 00:25:43.686 12:45:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:43.686 12:45:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:25:43.686 12:45:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:43.686 12:45:26 -- common/autotest_common.sh@889 -- # local i 00:25:43.686 12:45:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:43.686 12:45:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:43.686 12:45:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:43.946 12:45:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:43.946 [ 00:25:43.946 { 00:25:43.946 "name": "BaseBdev2", 00:25:43.946 "aliases": [ 00:25:43.946 "79048df7-54d2-403d-b80e-190afe3d5c2b" 00:25:43.946 ], 00:25:43.946 "product_name": "Malloc disk", 00:25:43.946 "block_size": 512, 00:25:43.946 "num_blocks": 65536, 00:25:43.946 "uuid": "79048df7-54d2-403d-b80e-190afe3d5c2b", 00:25:43.946 "assigned_rate_limits": { 00:25:43.946 "rw_ios_per_sec": 0, 00:25:43.946 "rw_mbytes_per_sec": 0, 00:25:43.946 "r_mbytes_per_sec": 0, 00:25:43.946 "w_mbytes_per_sec": 0 00:25:43.946 }, 00:25:43.946 "claimed": true, 00:25:43.946 "claim_type": "exclusive_write", 00:25:43.946 "zoned": false, 00:25:43.946 "supported_io_types": { 00:25:43.946 "read": true, 00:25:43.946 "write": true, 00:25:43.946 "unmap": true, 00:25:43.946 "write_zeroes": true, 00:25:43.946 "flush": true, 00:25:43.946 "reset": true, 00:25:43.946 "compare": false, 00:25:43.946 "compare_and_write": false, 00:25:43.946 "abort": true, 00:25:43.946 "nvme_admin": false, 00:25:43.946 "nvme_io": false 00:25:43.946 }, 00:25:43.946 "memory_domains": [ 00:25:43.946 { 00:25:43.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.946 "dma_device_type": 2 00:25:43.946 } 00:25:43.946 ], 00:25:43.946 "driver_specific": {} 00:25:43.946 } 00:25:43.946 ] 00:25:44.206 12:45:26 -- common/autotest_common.sh@895 -- # return 0 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:44.206 "name": "Existed_Raid", 00:25:44.206 "uuid": "d3b4a953-4948-42dd-977a-20866467cd6d", 00:25:44.206 "strip_size_kb": 64, 00:25:44.206 "state": "configuring", 00:25:44.206 "raid_level": "raid5f", 00:25:44.206 "superblock": true, 00:25:44.206 "num_base_bdevs": 3, 00:25:44.206 "num_base_bdevs_discovered": 2, 00:25:44.206 "num_base_bdevs_operational": 3, 00:25:44.206 "base_bdevs_list": [ 00:25:44.206 { 00:25:44.206 "name": "BaseBdev1", 00:25:44.206 "uuid": "40b23ea2-80bb-4e46-978a-7b7e6ebcfffc", 00:25:44.206 "is_configured": true, 00:25:44.206 "data_offset": 2048, 00:25:44.206 "data_size": 63488 00:25:44.206 }, 00:25:44.206 { 00:25:44.206 "name": "BaseBdev2", 00:25:44.206 "uuid": "79048df7-54d2-403d-b80e-190afe3d5c2b", 00:25:44.206 "is_configured": true, 00:25:44.206 "data_offset": 2048, 00:25:44.206 "data_size": 63488 00:25:44.206 }, 00:25:44.206 { 00:25:44.206 "name": "BaseBdev3", 00:25:44.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.206 "is_configured": false, 00:25:44.206 "data_offset": 0, 00:25:44.206 "data_size": 0 00:25:44.206 } 00:25:44.206 ] 00:25:44.206 }' 00:25:44.206 12:45:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:44.206 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:44.775 12:45:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:45.035 [2024-10-01 12:45:27.412842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:45.035 [2024-10-01 12:45:27.413339] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:25:45.035 BaseBdev3 00:25:45.035 [2024-10-01 12:45:27.414447] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:45.035 [2024-10-01 12:45:27.414655] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:45.035 [2024-10-01 12:45:27.418750] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:25:45.035 [2024-10-01 12:45:27.418870] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:25:45.035 [2024-10-01 12:45:27.419211] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.035 12:45:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:45.035 12:45:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:25:45.035 12:45:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:45.035 12:45:27 -- common/autotest_common.sh@889 -- # local i 00:25:45.035 12:45:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:45.035 12:45:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:45.035 12:45:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 
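[annotation] The blockcnt logged for the superblock array (126976, against 131072 in the earlier non-superblock run) follows from the geometry: each 65536-block malloc member gives up data_offset = 2048 blocks to the superblock, and raid5f spends one member's worth of capacity on parity, so (65536 - 2048) * (3 - 1) = 63488 * 2 = 126976 blocks of 512 bytes. Quick check:

    echo $(( (65536 - 2048) * (3 - 1) ))   # prints 126976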
00:25:45.296 12:45:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:45.296 [ 00:25:45.296 { 00:25:45.296 "name": "BaseBdev3", 00:25:45.296 "aliases": [ 00:25:45.296 "dfbfeb80-63e5-47ad-ab67-fd12329a04f2" 00:25:45.296 ], 00:25:45.296 "product_name": "Malloc disk", 00:25:45.296 "block_size": 512, 00:25:45.296 "num_blocks": 65536, 00:25:45.296 "uuid": "dfbfeb80-63e5-47ad-ab67-fd12329a04f2", 00:25:45.296 "assigned_rate_limits": { 00:25:45.296 "rw_ios_per_sec": 0, 00:25:45.296 "rw_mbytes_per_sec": 0, 00:25:45.296 "r_mbytes_per_sec": 0, 00:25:45.296 "w_mbytes_per_sec": 0 00:25:45.296 }, 00:25:45.296 "claimed": true, 00:25:45.296 "claim_type": "exclusive_write", 00:25:45.296 "zoned": false, 00:25:45.296 "supported_io_types": { 00:25:45.296 "read": true, 00:25:45.296 "write": true, 00:25:45.296 "unmap": true, 00:25:45.296 "write_zeroes": true, 00:25:45.296 "flush": true, 00:25:45.296 "reset": true, 00:25:45.296 "compare": false, 00:25:45.296 "compare_and_write": false, 00:25:45.296 "abort": true, 00:25:45.296 "nvme_admin": false, 00:25:45.296 "nvme_io": false 00:25:45.296 }, 00:25:45.296 "memory_domains": [ 00:25:45.296 { 00:25:45.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.296 "dma_device_type": 2 00:25:45.296 } 00:25:45.296 ], 00:25:45.296 "driver_specific": {} 00:25:45.296 } 00:25:45.296 ] 00:25:45.296 12:45:27 -- common/autotest_common.sh@895 -- # return 0 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.296 12:45:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.556 12:45:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:45.556 "name": "Existed_Raid", 00:25:45.556 "uuid": "d3b4a953-4948-42dd-977a-20866467cd6d", 00:25:45.556 "strip_size_kb": 64, 00:25:45.556 "state": "online", 00:25:45.556 "raid_level": "raid5f", 00:25:45.556 "superblock": true, 00:25:45.556 "num_base_bdevs": 3, 00:25:45.556 "num_base_bdevs_discovered": 3, 00:25:45.556 "num_base_bdevs_operational": 3, 00:25:45.556 "base_bdevs_list": [ 00:25:45.556 { 00:25:45.556 "name": "BaseBdev1", 00:25:45.556 "uuid": "40b23ea2-80bb-4e46-978a-7b7e6ebcfffc", 00:25:45.556 "is_configured": true, 00:25:45.556 "data_offset": 2048, 00:25:45.556 "data_size": 63488 00:25:45.556 }, 00:25:45.556 { 00:25:45.556 "name": "BaseBdev2", 00:25:45.556 "uuid": "79048df7-54d2-403d-b80e-190afe3d5c2b", 00:25:45.556 "is_configured": true, 00:25:45.556 "data_offset": 2048, 
00:25:45.556 "data_size": 63488 00:25:45.556 }, 00:25:45.556 { 00:25:45.556 "name": "BaseBdev3", 00:25:45.556 "uuid": "dfbfeb80-63e5-47ad-ab67-fd12329a04f2", 00:25:45.556 "is_configured": true, 00:25:45.556 "data_offset": 2048, 00:25:45.556 "data_size": 63488 00:25:45.556 } 00:25:45.556 ] 00:25:45.556 }' 00:25:45.556 12:45:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:45.556 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:25:46.124 12:45:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:46.383 [2024-10-01 12:45:28.711535] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.383 12:45:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.650 12:45:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:46.650 "name": "Existed_Raid", 00:25:46.650 "uuid": "d3b4a953-4948-42dd-977a-20866467cd6d", 00:25:46.650 "strip_size_kb": 64, 00:25:46.650 "state": "online", 00:25:46.650 "raid_level": "raid5f", 00:25:46.650 "superblock": true, 00:25:46.650 "num_base_bdevs": 3, 00:25:46.650 "num_base_bdevs_discovered": 2, 00:25:46.650 "num_base_bdevs_operational": 2, 00:25:46.650 "base_bdevs_list": [ 00:25:46.650 { 00:25:46.650 "name": null, 00:25:46.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.650 "is_configured": false, 00:25:46.650 "data_offset": 2048, 00:25:46.650 "data_size": 63488 00:25:46.650 }, 00:25:46.650 { 00:25:46.650 "name": "BaseBdev2", 00:25:46.650 "uuid": "79048df7-54d2-403d-b80e-190afe3d5c2b", 00:25:46.650 "is_configured": true, 00:25:46.651 "data_offset": 2048, 00:25:46.651 "data_size": 63488 00:25:46.651 }, 00:25:46.651 { 00:25:46.651 "name": "BaseBdev3", 00:25:46.651 "uuid": "dfbfeb80-63e5-47ad-ab67-fd12329a04f2", 00:25:46.651 "is_configured": true, 00:25:46.651 "data_offset": 2048, 00:25:46.651 "data_size": 63488 00:25:46.651 } 00:25:46.651 ] 00:25:46.651 }' 00:25:46.651 12:45:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:46.651 12:45:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.217 12:45:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:47.217 12:45:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:47.217 12:45:29 
-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.217 12:45:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:47.475 12:45:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:47.475 12:45:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:47.475 12:45:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:47.475 [2024-10-01 12:45:29.956769] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:47.475 [2024-10-01 12:45:29.956947] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:47.475 [2024-10-01 12:45:29.957177] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:47.733 12:45:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:47.733 12:45:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:47.733 12:45:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.733 12:45:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:47.992 12:45:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:47.992 12:45:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:47.992 12:45:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:47.992 [2024-10-01 12:45:30.458999] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:47.992 [2024-10-01 12:45:30.459243] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:25:48.250 12:45:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:48.250 12:45:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:48.250 12:45:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:48.250 12:45:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.250 12:45:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:48.250 12:45:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:48.250 12:45:30 -- bdev/bdev_raid.sh@287 -- # killprocess 127587 00:25:48.250 12:45:30 -- common/autotest_common.sh@926 -- # '[' -z 127587 ']' 00:25:48.250 12:45:30 -- common/autotest_common.sh@930 -- # kill -0 127587 00:25:48.250 12:45:30 -- common/autotest_common.sh@931 -- # uname 00:25:48.250 12:45:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:48.250 12:45:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127587 00:25:48.509 12:45:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:48.509 12:45:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:48.509 12:45:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127587' 00:25:48.509 killing process with pid 127587 00:25:48.509 12:45:30 -- common/autotest_common.sh@945 -- # kill 127587 00:25:48.509 [2024-10-01 12:45:30.788527] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:48.509 12:45:30 -- common/autotest_common.sh@950 -- # wait 127587 00:25:48.509 [2024-10-01 12:45:30.788772] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:49.888 ************************************ 00:25:49.888 END TEST raid5f_state_function_test_sb 00:25:49.888 
************************************ 00:25:49.888 12:45:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:49.888 00:25:49.888 real 0m11.460s 00:25:49.888 user 0m19.187s 00:25:49.888 sys 0m1.868s 00:25:49.888 12:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.888 12:45:31 -- common/autotest_common.sh@10 -- # set +x 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:25:49.888 12:45:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:49.888 12:45:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:49.888 12:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:49.888 ************************************ 00:25:49.888 START TEST raid5f_superblock_test 00:25:49.888 ************************************ 00:25:49.888 12:45:32 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@357 -- # raid_pid=127956 00:25:49.888 12:45:32 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127956 /var/tmp/spdk-raid.sock 00:25:49.888 12:45:32 -- common/autotest_common.sh@819 -- # '[' -z 127956 ']' 00:25:49.888 12:45:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:49.888 12:45:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:49.888 12:45:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:49.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:49.888 12:45:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.888 12:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:49.888 [2024-10-01 12:45:32.143943] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
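The trace above brings up a dedicated bdev_svc application listening on the private RPC socket /var/tmp/spdk-raid.sock and declares the superblock test's locals: three parallel arrays for the malloc/passthru base bdevs and a 64 KiB strip size for raid5f. A condensed sketch of the construction loop the script then runs, using only RPC invocations that appear verbatim later in this trace (the loop framing itself is an assumption reconstructed from the xtrace output, not the literal bdev_raid.sh source):

    # assumes bdev_svc is already listening on the test socket, as launched above
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        # 32 MiB malloc bdev with 512-byte blocks, as created below
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # passthru bdev pt$i stacked on malloc$i with a fixed UUID
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # raid5f across the passthru bdevs, 64 KiB strip; -s enables the superblock
    $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s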
00:25:49.888 [2024-10-01 12:45:32.144265] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127956 ] 00:25:49.888 [2024-10-01 12:45:32.312518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.147 [2024-10-01 12:45:32.495003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.147 [2024-10-01 12:45:32.672049] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:50.715 12:45:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:50.715 12:45:32 -- common/autotest_common.sh@852 -- # return 0 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:50.715 12:45:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:50.715 malloc1 00:25:50.715 12:45:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:50.973 [2024-10-01 12:45:33.339340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:50.973 [2024-10-01 12:45:33.339562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:50.973 [2024-10-01 12:45:33.339626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:50.973 [2024-10-01 12:45:33.339741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:50.973 [2024-10-01 12:45:33.342195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:50.973 [2024-10-01 12:45:33.342356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:50.973 pt1 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:50.973 12:45:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:51.231 malloc2 00:25:51.231 12:45:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:25:51.231 [2024-10-01 12:45:33.758681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:51.231 [2024-10-01 12:45:33.758864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.231 [2024-10-01 12:45:33.758939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:51.231 [2024-10-01 12:45:33.759097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.231 [2024-10-01 12:45:33.761562] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.231 [2024-10-01 12:45:33.761717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:51.231 pt2 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:51.490 malloc3 00:25:51.490 12:45:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:51.749 [2024-10-01 12:45:34.154207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:51.749 [2024-10-01 12:45:34.154355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.749 [2024-10-01 12:45:34.154453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:51.749 [2024-10-01 12:45:34.154651] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.749 [2024-10-01 12:45:34.157026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.749 [2024-10-01 12:45:34.157181] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:51.749 pt3 00:25:51.749 12:45:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:51.749 12:45:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:51.749 12:45:34 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:25:52.008 [2024-10-01 12:45:34.337972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:52.008 [2024-10-01 12:45:34.340080] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:52.008 [2024-10-01 12:45:34.340229] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:52.008 [2024-10-01 12:45:34.340425] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:25:52.008 [2024-10-01 12:45:34.340666] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:52.008 [2024-10-01 12:45:34.340783] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:52.008 [2024-10-01 12:45:34.344714] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:25:52.008 [2024-10-01 12:45:34.344823] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:25:52.008 [2024-10-01 12:45:34.345036] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.008 12:45:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.266 12:45:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:52.266 "name": "raid_bdev1", 00:25:52.266 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:25:52.266 "strip_size_kb": 64, 00:25:52.266 "state": "online", 00:25:52.266 "raid_level": "raid5f", 00:25:52.266 "superblock": true, 00:25:52.266 "num_base_bdevs": 3, 00:25:52.266 "num_base_bdevs_discovered": 3, 00:25:52.266 "num_base_bdevs_operational": 3, 00:25:52.266 "base_bdevs_list": [ 00:25:52.266 { 00:25:52.266 "name": "pt1", 00:25:52.266 "uuid": "e78ed836-35ce-5f44-b2bc-bd375f1e8916", 00:25:52.266 "is_configured": true, 00:25:52.266 "data_offset": 2048, 00:25:52.266 "data_size": 63488 00:25:52.266 }, 00:25:52.266 { 00:25:52.266 "name": "pt2", 00:25:52.266 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:25:52.266 "is_configured": true, 00:25:52.266 "data_offset": 2048, 00:25:52.266 "data_size": 63488 00:25:52.266 }, 00:25:52.266 { 00:25:52.266 "name": "pt3", 00:25:52.266 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:25:52.266 "is_configured": true, 00:25:52.266 "data_offset": 2048, 00:25:52.266 "data_size": 63488 00:25:52.266 } 00:25:52.266 ] 00:25:52.266 }' 00:25:52.266 12:45:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:52.266 12:45:34 -- common/autotest_common.sh@10 -- # set +x 00:25:52.525 12:45:35 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:52.525 12:45:35 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:25:52.784 [2024-10-01 12:45:35.218187] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:52.784 12:45:35 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=71cee3f8-91ef-4f2f-adb7-afc6a80a00c7 00:25:52.784 12:45:35 -- bdev/bdev_raid.sh@380 -- # '[' -z 71cee3f8-91ef-4f2f-adb7-afc6a80a00c7 ']' 00:25:52.784 12:45:35 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:53.043 [2024-10-01 12:45:35.393816] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:53.043 [2024-10-01 12:45:35.393930] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:53.043 [2024-10-01 12:45:35.394074] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.043 [2024-10-01 12:45:35.394164] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:53.043 [2024-10-01 12:45:35.394193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:25:53.043 12:45:35 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.043 12:45:35 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:25:53.301 12:45:35 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:25:53.301 12:45:35 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:25:53.301 12:45:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:53.301 12:45:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:53.301 12:45:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:53.301 12:45:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:53.559 12:45:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:53.559 12:45:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:53.821 12:45:36 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:53.821 12:45:36 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:53.821 12:45:36 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:25:53.821 12:45:36 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:25:53.821 12:45:36 -- common/autotest_common.sh@640 -- # local es=0 00:25:53.821 12:45:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:25:53.821 12:45:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.821 12:45:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.821 12:45:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.821 12:45:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.821 12:45:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.821 12:45:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.821 12:45:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.821 12:45:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:53.821 12:45:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:25:54.080 [2024-10-01 12:45:36.468235] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:54.080 [2024-10-01 12:45:36.470318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:54.080 [2024-10-01 12:45:36.470456] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:54.080 [2024-10-01 12:45:36.470520] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:25:54.080 [2024-10-01 12:45:36.470682] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:25:54.080 [2024-10-01 12:45:36.470737] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:25:54.080 [2024-10-01 12:45:36.470918] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:54.080 [2024-10-01 12:45:36.470987] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:25:54.080 request: 00:25:54.080 { 00:25:54.080 "name": "raid_bdev1", 00:25:54.080 "raid_level": "raid5f", 00:25:54.080 "base_bdevs": [ 00:25:54.080 "malloc1", 00:25:54.080 "malloc2", 00:25:54.080 "malloc3" 00:25:54.080 ], 00:25:54.080 "superblock": false, 00:25:54.080 "strip_size_kb": 64, 00:25:54.080 "method": "bdev_raid_create", 00:25:54.080 "req_id": 1 00:25:54.080 } 00:25:54.080 Got JSON-RPC error response 00:25:54.080 response: 00:25:54.080 { 00:25:54.080 "code": -17, 00:25:54.080 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:54.080 } 00:25:54.080 12:45:36 -- common/autotest_common.sh@643 -- # es=1 00:25:54.080 12:45:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:54.080 12:45:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:54.080 12:45:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:54.080 12:45:36 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.080 12:45:36 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:54.340 [2024-10-01 12:45:36.827721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:54.340 [2024-10-01 12:45:36.827863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.340 [2024-10-01 12:45:36.827936] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:54.340 [2024-10-01 12:45:36.828011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.340 [2024-10-01 12:45:36.830351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.340 [2024-10-01 12:45:36.830489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:54.340 [2024-10-01 12:45:36.830651] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:54.340 [2024-10-01 12:45:36.830712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:54.340 pt1 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.340 12:45:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.599 12:45:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:54.600 "name": "raid_bdev1", 00:25:54.600 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:25:54.600 "strip_size_kb": 64, 00:25:54.600 "state": "configuring", 00:25:54.600 "raid_level": "raid5f", 00:25:54.600 "superblock": true, 00:25:54.600 "num_base_bdevs": 3, 00:25:54.600 "num_base_bdevs_discovered": 1, 00:25:54.600 "num_base_bdevs_operational": 3, 00:25:54.600 "base_bdevs_list": [ 00:25:54.600 { 00:25:54.600 "name": "pt1", 00:25:54.600 "uuid": "e78ed836-35ce-5f44-b2bc-bd375f1e8916", 00:25:54.600 "is_configured": true, 00:25:54.600 "data_offset": 2048, 00:25:54.600 "data_size": 63488 00:25:54.600 }, 00:25:54.600 { 00:25:54.600 "name": null, 00:25:54.600 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:25:54.600 "is_configured": false, 00:25:54.600 "data_offset": 2048, 00:25:54.600 "data_size": 63488 00:25:54.600 }, 00:25:54.600 { 00:25:54.600 "name": null, 00:25:54.600 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:25:54.600 "is_configured": false, 00:25:54.600 "data_offset": 2048, 00:25:54.600 "data_size": 63488 00:25:54.600 } 00:25:54.600 ] 00:25:54.600 }' 00:25:54.600 12:45:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:54.600 12:45:37 -- common/autotest_common.sh@10 -- # set +x 00:25:55.167 12:45:37 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:25:55.167 12:45:37 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:55.167 [2024-10-01 12:45:37.690519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:55.167 [2024-10-01 12:45:37.690689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.167 [2024-10-01 12:45:37.690759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:55.167 [2024-10-01 12:45:37.690849] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.167 [2024-10-01 12:45:37.691214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.167 [2024-10-01 12:45:37.691344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:55.167 [2024-10-01 12:45:37.691486] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:55.167 [2024-10-01 12:45:37.691600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:55.167 pt2 00:25:55.427 12:45:37 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:55.427 [2024-10-01 12:45:37.874331] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:55.427 12:45:37 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.428 12:45:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.692 12:45:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:55.693 "name": "raid_bdev1", 00:25:55.693 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:25:55.693 "strip_size_kb": 64, 00:25:55.693 "state": "configuring", 00:25:55.693 "raid_level": "raid5f", 00:25:55.693 "superblock": true, 00:25:55.693 "num_base_bdevs": 3, 00:25:55.693 "num_base_bdevs_discovered": 1, 00:25:55.693 "num_base_bdevs_operational": 3, 00:25:55.693 "base_bdevs_list": [ 00:25:55.693 { 00:25:55.693 "name": "pt1", 00:25:55.693 "uuid": "e78ed836-35ce-5f44-b2bc-bd375f1e8916", 00:25:55.693 "is_configured": true, 00:25:55.693 "data_offset": 2048, 00:25:55.693 "data_size": 63488 00:25:55.693 }, 00:25:55.693 { 00:25:55.693 "name": null, 00:25:55.693 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:25:55.693 "is_configured": false, 00:25:55.693 "data_offset": 2048, 00:25:55.693 "data_size": 63488 00:25:55.693 }, 00:25:55.693 { 00:25:55.693 "name": null, 00:25:55.693 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:25:55.693 "is_configured": false, 00:25:55.693 "data_offset": 2048, 00:25:55.693 "data_size": 63488 00:25:55.693 } 00:25:55.693 ] 00:25:55.693 }' 00:25:55.693 12:45:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:55.693 12:45:38 -- common/autotest_common.sh@10 -- # set +x 00:25:56.305 12:45:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:56.305 12:45:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:56.305 12:45:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:56.305 [2024-10-01 12:45:38.741030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:56.305 [2024-10-01 12:45:38.741199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.305 [2024-10-01 12:45:38.741258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:56.305 [2024-10-01 12:45:38.741350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.305 [2024-10-01 12:45:38.741725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.305 [2024-10-01 12:45:38.741848] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:56.305 [2024-10-01 12:45:38.742043] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:56.305 [2024-10-01 12:45:38.742138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:56.305 pt2 00:25:56.305 12:45:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:56.305 12:45:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:56.305 12:45:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:56.564 [2024-10-01 12:45:38.924774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:56.564 [2024-10-01 12:45:38.924933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.564 [2024-10-01 12:45:38.924988] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:56.564 [2024-10-01 12:45:38.925074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.564 [2024-10-01 12:45:38.925429] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.564 [2024-10-01 12:45:38.925540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:56.564 [2024-10-01 12:45:38.925677] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:56.564 [2024-10-01 12:45:38.925773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:56.564 [2024-10-01 12:45:38.925898] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:25:56.564 [2024-10-01 12:45:38.925929] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:56.564 [2024-10-01 12:45:38.926030] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:56.564 [2024-10-01 12:45:38.929542] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:25:56.564 [2024-10-01 12:45:38.929646] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:25:56.564 [2024-10-01 12:45:38.929882] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.564 pt3 00:25:56.564 12:45:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:56.564 12:45:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:56.564 12:45:38 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:56.564 12:45:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:56.564 12:45:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:56.565 12:45:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.565 
12:45:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.823 12:45:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:56.823 "name": "raid_bdev1", 00:25:56.823 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:25:56.823 "strip_size_kb": 64, 00:25:56.824 "state": "online", 00:25:56.824 "raid_level": "raid5f", 00:25:56.824 "superblock": true, 00:25:56.824 "num_base_bdevs": 3, 00:25:56.824 "num_base_bdevs_discovered": 3, 00:25:56.824 "num_base_bdevs_operational": 3, 00:25:56.824 "base_bdevs_list": [ 00:25:56.824 { 00:25:56.824 "name": "pt1", 00:25:56.824 "uuid": "e78ed836-35ce-5f44-b2bc-bd375f1e8916", 00:25:56.824 "is_configured": true, 00:25:56.824 "data_offset": 2048, 00:25:56.824 "data_size": 63488 00:25:56.824 }, 00:25:56.824 { 00:25:56.824 "name": "pt2", 00:25:56.824 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:25:56.824 "is_configured": true, 00:25:56.824 "data_offset": 2048, 00:25:56.824 "data_size": 63488 00:25:56.824 }, 00:25:56.824 { 00:25:56.824 "name": "pt3", 00:25:56.824 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:25:56.824 "is_configured": true, 00:25:56.824 "data_offset": 2048, 00:25:56.824 "data_size": 63488 00:25:56.824 } 00:25:56.824 ] 00:25:56.824 }' 00:25:56.824 12:45:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:56.824 12:45:39 -- common/autotest_common.sh@10 -- # set +x 00:25:57.390 12:45:39 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:57.390 12:45:39 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:57.390 [2024-10-01 12:45:39.810034] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:57.390 12:45:39 -- bdev/bdev_raid.sh@430 -- # '[' 71cee3f8-91ef-4f2f-adb7-afc6a80a00c7 '!=' 71cee3f8-91ef-4f2f-adb7-afc6a80a00c7 ']' 00:25:57.390 12:45:39 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:25:57.390 12:45:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:57.390 12:45:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:57.390 12:45:39 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:57.648 [2024-10-01 12:45:40.001643] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.648 12:45:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.907 12:45:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:57.907 "name": "raid_bdev1", 00:25:57.907 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:25:57.907 "strip_size_kb": 64, 
00:25:57.907 "state": "online", 00:25:57.907 "raid_level": "raid5f", 00:25:57.907 "superblock": true, 00:25:57.907 "num_base_bdevs": 3, 00:25:57.907 "num_base_bdevs_discovered": 2, 00:25:57.907 "num_base_bdevs_operational": 2, 00:25:57.907 "base_bdevs_list": [ 00:25:57.907 { 00:25:57.907 "name": null, 00:25:57.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.907 "is_configured": false, 00:25:57.907 "data_offset": 2048, 00:25:57.907 "data_size": 63488 00:25:57.907 }, 00:25:57.907 { 00:25:57.907 "name": "pt2", 00:25:57.907 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:25:57.907 "is_configured": true, 00:25:57.907 "data_offset": 2048, 00:25:57.907 "data_size": 63488 00:25:57.907 }, 00:25:57.907 { 00:25:57.907 "name": "pt3", 00:25:57.907 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:25:57.907 "is_configured": true, 00:25:57.907 "data_offset": 2048, 00:25:57.907 "data_size": 63488 00:25:57.907 } 00:25:57.907 ] 00:25:57.907 }' 00:25:57.907 12:45:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:57.907 12:45:40 -- common/autotest_common.sh@10 -- # set +x 00:25:58.473 12:45:40 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:58.473 [2024-10-01 12:45:40.912291] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:58.473 [2024-10-01 12:45:40.912411] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:58.473 [2024-10-01 12:45:40.912583] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:58.473 [2024-10-01 12:45:40.912655] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:58.473 [2024-10-01 12:45:40.912871] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:25:58.473 12:45:40 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:25:58.474 12:45:40 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.731 12:45:41 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:25:58.731 12:45:41 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:25:58.731 12:45:41 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:25:58.731 12:45:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:58.731 12:45:41 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:58.989 12:45:41 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:59.247 [2024-10-01 12:45:41.639221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:59.247 [2024-10-01 12:45:41.639413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:25:59.247 [2024-10-01 12:45:41.639475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:59.247 [2024-10-01 12:45:41.639567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.247 [2024-10-01 12:45:41.641965] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.247 [2024-10-01 12:45:41.642097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:59.247 [2024-10-01 12:45:41.642287] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:59.247 [2024-10-01 12:45:41.642359] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:59.247 pt2 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.247 12:45:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.506 12:45:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:59.506 "name": "raid_bdev1", 00:25:59.506 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:25:59.506 "strip_size_kb": 64, 00:25:59.506 "state": "configuring", 00:25:59.506 "raid_level": "raid5f", 00:25:59.506 "superblock": true, 00:25:59.506 "num_base_bdevs": 3, 00:25:59.506 "num_base_bdevs_discovered": 1, 00:25:59.506 "num_base_bdevs_operational": 2, 00:25:59.506 "base_bdevs_list": [ 00:25:59.506 { 00:25:59.506 "name": null, 00:25:59.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.506 "is_configured": false, 00:25:59.506 "data_offset": 2048, 00:25:59.506 "data_size": 63488 00:25:59.506 }, 00:25:59.506 { 00:25:59.506 "name": "pt2", 00:25:59.506 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:25:59.506 "is_configured": true, 00:25:59.506 "data_offset": 2048, 00:25:59.506 "data_size": 63488 00:25:59.506 }, 00:25:59.506 { 00:25:59.506 "name": null, 00:25:59.506 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:25:59.506 "is_configured": false, 00:25:59.506 "data_offset": 2048, 00:25:59.506 "data_size": 63488 00:25:59.506 } 00:25:59.506 ] 00:25:59.506 }' 00:25:59.506 12:45:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:59.506 12:45:41 -- common/autotest_common.sh@10 -- # set +x 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@462 -- # i=2 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:00.073 [2024-10-01 12:45:42.554539] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:00.073 [2024-10-01 12:45:42.554719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.073 [2024-10-01 12:45:42.554781] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:00.073 [2024-10-01 12:45:42.554915] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.073 [2024-10-01 12:45:42.555307] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.073 [2024-10-01 12:45:42.555425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:00.073 [2024-10-01 12:45:42.555551] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:00.073 [2024-10-01 12:45:42.555711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:00.073 [2024-10-01 12:45:42.555824] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:26:00.073 [2024-10-01 12:45:42.555934] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:00.073 [2024-10-01 12:45:42.556088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:00.073 [2024-10-01 12:45:42.559583] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:26:00.073 [2024-10-01 12:45:42.559700] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:26:00.073 [2024-10-01 12:45:42.560036] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.073 pt3 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:00.073 12:45:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:00.074 12:45:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:00.074 12:45:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.074 12:45:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.332 12:45:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.332 "name": "raid_bdev1", 00:26:00.332 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:26:00.332 "strip_size_kb": 64, 00:26:00.332 "state": "online", 00:26:00.332 "raid_level": "raid5f", 00:26:00.332 "superblock": true, 00:26:00.332 "num_base_bdevs": 3, 00:26:00.332 "num_base_bdevs_discovered": 2, 00:26:00.332 "num_base_bdevs_operational": 2, 00:26:00.332 "base_bdevs_list": [ 00:26:00.332 { 00:26:00.332 "name": null, 00:26:00.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.332 "is_configured": false, 00:26:00.332 "data_offset": 2048, 00:26:00.332 "data_size": 63488 00:26:00.332 }, 00:26:00.332 { 00:26:00.332 "name": "pt2", 00:26:00.333 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 
00:26:00.333 "is_configured": true, 00:26:00.333 "data_offset": 2048, 00:26:00.333 "data_size": 63488 00:26:00.333 }, 00:26:00.333 { 00:26:00.333 "name": "pt3", 00:26:00.333 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:26:00.333 "is_configured": true, 00:26:00.333 "data_offset": 2048, 00:26:00.333 "data_size": 63488 00:26:00.333 } 00:26:00.333 ] 00:26:00.333 }' 00:26:00.333 12:45:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.333 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:26:00.900 12:45:43 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:26:00.900 12:45:43 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:01.159 [2024-10-01 12:45:43.465103] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:01.159 [2024-10-01 12:45:43.465230] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:01.159 [2024-10-01 12:45:43.465410] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.159 [2024-10-01 12:45:43.465484] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:01.159 [2024-10-01 12:45:43.465569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:26:01.159 12:45:43 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.159 12:45:43 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:26:01.159 12:45:43 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:26:01.159 12:45:43 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:26:01.159 12:45:43 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:01.418 [2024-10-01 12:45:43.816606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:01.418 [2024-10-01 12:45:43.816790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.418 [2024-10-01 12:45:43.816853] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:01.418 [2024-10-01 12:45:43.816977] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.418 [2024-10-01 12:45:43.819464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.418 [2024-10-01 12:45:43.819633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:01.418 [2024-10-01 12:45:43.819844] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:01.418 [2024-10-01 12:45:43.819936] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:01.418 pt1 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:01.418 12:45:43 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.418 12:45:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.677 12:45:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:01.677 "name": "raid_bdev1", 00:26:01.677 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:26:01.677 "strip_size_kb": 64, 00:26:01.677 "state": "configuring", 00:26:01.677 "raid_level": "raid5f", 00:26:01.677 "superblock": true, 00:26:01.677 "num_base_bdevs": 3, 00:26:01.677 "num_base_bdevs_discovered": 1, 00:26:01.677 "num_base_bdevs_operational": 3, 00:26:01.677 "base_bdevs_list": [ 00:26:01.677 { 00:26:01.677 "name": "pt1", 00:26:01.677 "uuid": "e78ed836-35ce-5f44-b2bc-bd375f1e8916", 00:26:01.677 "is_configured": true, 00:26:01.677 "data_offset": 2048, 00:26:01.677 "data_size": 63488 00:26:01.677 }, 00:26:01.677 { 00:26:01.677 "name": null, 00:26:01.677 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:26:01.677 "is_configured": false, 00:26:01.677 "data_offset": 2048, 00:26:01.677 "data_size": 63488 00:26:01.677 }, 00:26:01.677 { 00:26:01.677 "name": null, 00:26:01.677 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:26:01.677 "is_configured": false, 00:26:01.677 "data_offset": 2048, 00:26:01.677 "data_size": 63488 00:26:01.677 } 00:26:01.677 ] 00:26:01.677 }' 00:26:01.677 12:45:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:01.677 12:45:44 -- common/autotest_common.sh@10 -- # set +x 00:26:02.246 12:45:44 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:26:02.246 12:45:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:02.246 12:45:44 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:02.246 12:45:44 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:02.246 12:45:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:02.246 12:45:44 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:02.504 12:45:44 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:02.504 12:45:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:02.504 12:45:44 -- bdev/bdev_raid.sh@489 -- # i=2 00:26:02.504 12:45:44 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:02.762 [2024-10-01 12:45:45.090777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:02.762 [2024-10-01 12:45:45.091011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.762 [2024-10-01 12:45:45.091076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:02.762 [2024-10-01 12:45:45.091201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.762 [2024-10-01 12:45:45.091646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.762 [2024-10-01 12:45:45.091783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:02.762 [2024-10-01 12:45:45.091969] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:26:02.762 [2024-10-01 12:45:45.092011] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:02.762 [2024-10-01 12:45:45.092093] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:02.762 [2024-10-01 12:45:45.092136] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:26:02.762 [2024-10-01 12:45:45.092221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:02.762 pt3 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.762 12:45:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:02.762 "name": "raid_bdev1", 00:26:02.763 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:26:02.763 "strip_size_kb": 64, 00:26:02.763 "state": "configuring", 00:26:02.763 "raid_level": "raid5f", 00:26:02.763 "superblock": true, 00:26:02.763 "num_base_bdevs": 3, 00:26:02.763 "num_base_bdevs_discovered": 1, 00:26:02.763 "num_base_bdevs_operational": 2, 00:26:02.763 "base_bdevs_list": [ 00:26:02.763 { 00:26:02.763 "name": null, 00:26:02.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.763 "is_configured": false, 00:26:02.763 "data_offset": 2048, 00:26:02.763 "data_size": 63488 00:26:02.763 }, 00:26:02.763 { 00:26:02.763 "name": null, 00:26:02.763 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:26:02.763 "is_configured": false, 00:26:02.763 "data_offset": 2048, 00:26:02.763 "data_size": 63488 00:26:02.763 }, 00:26:02.763 { 00:26:02.763 "name": "pt3", 00:26:02.763 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:26:02.763 "is_configured": true, 00:26:02.763 "data_offset": 2048, 00:26:02.763 "data_size": 63488 00:26:02.763 } 00:26:02.763 ] 00:26:02.763 }' 00:26:02.763 12:45:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:02.763 12:45:45 -- common/autotest_common.sh@10 -- # set +x 00:26:03.330 12:45:45 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:26:03.330 12:45:45 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:03.330 12:45:45 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:03.589 [2024-10-01 12:45:45.986250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:03.589 [2024-10-01 12:45:45.986482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.589 [2024-10-01 
12:45:45.986545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:03.589 [2024-10-01 12:45:45.986645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.589 [2024-10-01 12:45:45.987077] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.589 [2024-10-01 12:45:45.987212] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:03.589 [2024-10-01 12:45:45.987378] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:03.589 [2024-10-01 12:45:45.987452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:03.589 [2024-10-01 12:45:45.987758] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:26:03.589 [2024-10-01 12:45:45.987851] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:03.589 [2024-10-01 12:45:45.987993] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:03.589 [2024-10-01 12:45:45.991833] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:26:03.589 [2024-10-01 12:45:45.991992] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:26:03.589 [2024-10-01 12:45:45.992268] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.589 pt2 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.589 12:45:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.848 12:45:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:03.848 "name": "raid_bdev1", 00:26:03.848 "uuid": "71cee3f8-91ef-4f2f-adb7-afc6a80a00c7", 00:26:03.848 "strip_size_kb": 64, 00:26:03.848 "state": "online", 00:26:03.848 "raid_level": "raid5f", 00:26:03.848 "superblock": true, 00:26:03.848 "num_base_bdevs": 3, 00:26:03.848 "num_base_bdevs_discovered": 2, 00:26:03.848 "num_base_bdevs_operational": 2, 00:26:03.848 "base_bdevs_list": [ 00:26:03.848 { 00:26:03.848 "name": null, 00:26:03.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.848 "is_configured": false, 00:26:03.848 "data_offset": 2048, 00:26:03.848 "data_size": 63488 00:26:03.848 }, 00:26:03.848 { 00:26:03.848 "name": "pt2", 00:26:03.848 "uuid": "38096adb-0119-560a-9a12-be45c2c74700", 00:26:03.848 "is_configured": true, 00:26:03.848 "data_offset": 2048, 
00:26:03.848 "data_size": 63488 00:26:03.848 }, 00:26:03.848 { 00:26:03.848 "name": "pt3", 00:26:03.848 "uuid": "d0de35f9-c9b8-5381-a68d-9784a1475b0b", 00:26:03.848 "is_configured": true, 00:26:03.848 "data_offset": 2048, 00:26:03.848 "data_size": 63488 00:26:03.848 } 00:26:03.848 ] 00:26:03.848 }' 00:26:03.848 12:45:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:03.848 12:45:46 -- common/autotest_common.sh@10 -- # set +x 00:26:04.414 12:45:46 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:04.414 12:45:46 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:04.414 [2024-10-01 12:45:46.902618] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.414 12:45:46 -- bdev/bdev_raid.sh@506 -- # '[' 71cee3f8-91ef-4f2f-adb7-afc6a80a00c7 '!=' 71cee3f8-91ef-4f2f-adb7-afc6a80a00c7 ']' 00:26:04.414 12:45:46 -- bdev/bdev_raid.sh@511 -- # killprocess 127956 00:26:04.414 12:45:46 -- common/autotest_common.sh@926 -- # '[' -z 127956 ']' 00:26:04.414 12:45:46 -- common/autotest_common.sh@930 -- # kill -0 127956 00:26:04.414 12:45:46 -- common/autotest_common.sh@931 -- # uname 00:26:04.414 12:45:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:04.414 12:45:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127956 00:26:04.673 12:45:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:04.673 12:45:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:04.673 12:45:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127956' 00:26:04.673 killing process with pid 127956 00:26:04.673 12:45:46 -- common/autotest_common.sh@945 -- # kill 127956 00:26:04.673 [2024-10-01 12:45:46.959588] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:04.673 [2024-10-01 12:45:46.959776] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:04.673 [2024-10-01 12:45:46.959851] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:04.673 [2024-10-01 12:45:46.959943] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:26:04.674 12:45:46 -- common/autotest_common.sh@950 -- # wait 127956 00:26:04.933 [2024-10-01 12:45:47.213263] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:06.311 ************************************ 00:26:06.311 END TEST raid5f_superblock_test 00:26:06.311 ************************************ 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:06.311 00:26:06.311 real 0m16.321s 00:26:06.311 user 0m28.715s 00:26:06.311 sys 0m2.612s 00:26:06.311 12:45:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.311 12:45:48 -- common/autotest_common.sh@10 -- # set +x 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:26:06.311 12:45:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:06.311 12:45:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.311 12:45:48 -- common/autotest_common.sh@10 -- # set +x 00:26:06.311 ************************************ 00:26:06.311 START TEST raid5f_rebuild_test 00:26:06.311 ************************************ 00:26:06.311 12:45:48 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 
false false 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=128531 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:06.311 12:45:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128531 /var/tmp/spdk-raid.sock 00:26:06.311 12:45:48 -- common/autotest_common.sh@819 -- # '[' -z 128531 ']' 00:26:06.311 12:45:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:06.311 12:45:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:06.311 12:45:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:06.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:06.311 12:45:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:06.311 12:45:48 -- common/autotest_common.sh@10 -- # set +x 00:26:06.311 [2024-10-01 12:45:48.570868] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:06.312 [2024-10-01 12:45:48.571166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128531 ] 00:26:06.312 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:06.312 Zero copy mechanism will not be used. 
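The trace above launches a dedicated bdevperf app with a private RPC socket and then blocks until that socket answers. A minimal standalone sketch of the same startup sequence, assuming the repo path and socket name shown in this trace and polling with the stock spdk_get_version RPC as a stand-in for the harness's waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk   # repo path as seen in the trace
  SOCK=/var/tmp/spdk-raid.sock
  # Same invocation as bdev_raid.sh@543 above: 60 s of 3 MiB randrw at queue
  # depth 2 against raid_bdev1, started in wait-for-RPC mode (-z).
  "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Poll until the app services RPCs before issuing any bdev commands.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
  done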
00:26:06.312 [2024-10-01 12:45:48.736280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.570 [2024-10-01 12:45:48.954655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.829 [2024-10-01 12:45:49.195344] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.087 12:45:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:07.087 12:45:49 -- common/autotest_common.sh@852 -- # return 0 00:26:07.087 12:45:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:07.087 12:45:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:07.087 12:45:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:07.087 BaseBdev1 00:26:07.087 12:45:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:07.087 12:45:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:07.087 12:45:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:07.345 BaseBdev2 00:26:07.604 12:45:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:07.604 12:45:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:07.604 12:45:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:07.604 BaseBdev3 00:26:07.862 12:45:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:07.862 spare_malloc 00:26:07.862 12:45:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:08.121 spare_delay 00:26:08.121 12:45:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:08.379 [2024-10-01 12:45:50.728881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:08.379 [2024-10-01 12:45:50.729154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.379 [2024-10-01 12:45:50.729218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:08.379 [2024-10-01 12:45:50.729347] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.379 [2024-10-01 12:45:50.731830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.379 [2024-10-01 12:45:50.732012] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:08.379 spare 00:26:08.379 12:45:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:26:08.637 [2024-10-01 12:45:50.912673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:08.637 [2024-10-01 12:45:50.914881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:08.637 [2024-10-01 12:45:50.915043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:08.637 [2024-10-01 12:45:50.915138] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:26:08.637 
[2024-10-01 12:45:50.915223] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:08.637 [2024-10-01 12:45:50.915387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:26:08.637 [2024-10-01 12:45:50.922286] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:26:08.637 [2024-10-01 12:45:50.922414] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:26:08.637 [2024-10-01 12:45:50.922685] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.637 12:45:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.637 12:45:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:08.637 "name": "raid_bdev1", 00:26:08.637 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:08.637 "strip_size_kb": 64, 00:26:08.637 "state": "online", 00:26:08.637 "raid_level": "raid5f", 00:26:08.637 "superblock": false, 00:26:08.637 "num_base_bdevs": 3, 00:26:08.637 "num_base_bdevs_discovered": 3, 00:26:08.637 "num_base_bdevs_operational": 3, 00:26:08.637 "base_bdevs_list": [ 00:26:08.637 { 00:26:08.637 "name": "BaseBdev1", 00:26:08.637 "uuid": "2fdac839-bc74-4382-97b3-e9fa2ab1d415", 00:26:08.637 "is_configured": true, 00:26:08.637 "data_offset": 0, 00:26:08.637 "data_size": 65536 00:26:08.637 }, 00:26:08.637 { 00:26:08.637 "name": "BaseBdev2", 00:26:08.637 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:08.637 "is_configured": true, 00:26:08.637 "data_offset": 0, 00:26:08.638 "data_size": 65536 00:26:08.638 }, 00:26:08.638 { 00:26:08.638 "name": "BaseBdev3", 00:26:08.638 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:08.638 "is_configured": true, 00:26:08.638 "data_offset": 0, 00:26:08.638 "data_size": 65536 00:26:08.638 } 00:26:08.638 ] 00:26:08.638 }' 00:26:08.638 12:45:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:08.638 12:45:51 -- common/autotest_common.sh@10 -- # set +x 00:26:09.204 12:45:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:09.204 12:45:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:09.490 [2024-10-01 12:45:51.772116] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.490 12:45:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:26:09.490 12:45:51 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
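verify_raid_bdev_state, whose xtrace dominates this section, reduces to one bdev_raid_get_bdevs call plus a few jq comparisons against the expected values. A condensed sketch under that reading; check_raid_state and the rpc wrapper are hypothetical names, while the RPC method and JSON field names are taken verbatim from the dumps above:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  check_raid_state() {
    local name=$1 state=$2 level=$3 strip_kb=$4 operational=$5 info
    info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r .state         <<<"$info") == "$state" ]] &&
    [[ $(jq -r .raid_level    <<<"$info") == "$level" ]] &&
    [[ $(jq -r .strip_size_kb <<<"$info") == "$strip_kb" ]] &&
    [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$operational" ]]
  }
  check_raid_state raid_bdev1 online raid5f 64 3   # mirrors bdev_raid.sh@564 above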
00:26:09.490 12:45:51 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:09.490 12:45:51 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:26:09.490 12:45:51 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:09.490 12:45:51 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:09.490 12:45:51 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@12 -- # local i 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:09.490 12:45:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:09.749 [2024-10-01 12:45:52.159482] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:09.749 /dev/nbd0 00:26:09.749 12:45:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:09.749 12:45:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:09.749 12:45:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:09.749 12:45:52 -- common/autotest_common.sh@857 -- # local i 00:26:09.749 12:45:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:09.749 12:45:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:09.749 12:45:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:09.749 12:45:52 -- common/autotest_common.sh@861 -- # break 00:26:09.749 12:45:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:09.749 12:45:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:09.749 12:45:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:09.749 1+0 records in 00:26:09.749 1+0 records out 00:26:09.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564257 s, 7.3 MB/s 00:26:09.749 12:45:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:09.749 12:45:52 -- common/autotest_common.sh@874 -- # size=4096 00:26:09.749 12:45:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:09.749 12:45:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:09.749 12:45:52 -- common/autotest_common.sh@877 -- # return 0 00:26:09.749 12:45:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:09.749 12:45:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:09.749 12:45:52 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:09.749 12:45:52 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:26:09.749 12:45:52 -- bdev/bdev_raid.sh@582 -- # echo 128 00:26:09.749 12:45:52 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:26:10.318 512+0 records in 00:26:10.318 512+0 records out 00:26:10.318 67108864 bytes (67 MB, 64 MiB) copied, 0.371077 s, 181 MB/s 00:26:10.318 12:45:52 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
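The 131072-byte block size in the dd above is not arbitrary: raid5f over 3 base bdevs with a 64 KiB strip keeps one parity strip per stripe, so each stripe holds two 64 KiB data strips, write_unit_size is 256 blocks (2 x 64 KiB / 512 B), and every write covers exactly one full stripe with no read-modify-write; 512 such writes account for the 67108864 bytes (64 MiB) reported. A sketch of the same NBD round-trip, assuming the socket and script paths used throughout this trace:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc nbd_start_disk raid_bdev1 /dev/nbd0   # expose the raid bdev as a block device
  # One full raid5f stripe per write: 2 data strips x 64 KiB = 131072 B.
  dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
  rpc nbd_stop_disk /dev/nbd0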
00:26:10.318 12:45:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@51 -- # local i 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:10.318 [2024-10-01 12:45:52.824155] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@41 -- # break 00:26:10.318 12:45:52 -- bdev/nbd_common.sh@45 -- # return 0 00:26:10.318 12:45:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:10.577 [2024-10-01 12:45:52.979073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.577 12:45:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.836 12:45:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.836 "name": "raid_bdev1", 00:26:10.836 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:10.836 "strip_size_kb": 64, 00:26:10.836 "state": "online", 00:26:10.836 "raid_level": "raid5f", 00:26:10.836 "superblock": false, 00:26:10.836 "num_base_bdevs": 3, 00:26:10.836 "num_base_bdevs_discovered": 2, 00:26:10.836 "num_base_bdevs_operational": 2, 00:26:10.836 "base_bdevs_list": [ 00:26:10.836 { 00:26:10.836 "name": null, 00:26:10.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.836 "is_configured": false, 00:26:10.836 "data_offset": 0, 00:26:10.836 "data_size": 65536 00:26:10.836 }, 00:26:10.836 { 00:26:10.836 "name": "BaseBdev2", 00:26:10.836 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:10.836 "is_configured": true, 00:26:10.836 "data_offset": 0, 00:26:10.836 "data_size": 65536 00:26:10.836 }, 00:26:10.836 { 00:26:10.836 "name": "BaseBdev3", 00:26:10.836 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:10.836 "is_configured": true, 00:26:10.836 "data_offset": 0, 00:26:10.836 "data_size": 65536 00:26:10.836 } 00:26:10.836 ] 00:26:10.836 }' 
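Degrading the array, as traced here, is a single RPC followed by a state re-check; for raid5f the bdev is expected to stay online with 2 of its 3 members operational. A sketch reusing the hypothetical check_raid_state helper from the earlier block:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_raid_remove_base_bdev BaseBdev1        # as in bdev_raid.sh@591 above
  check_raid_state raid_bdev1 online raid5f 64 2  # degraded but still online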
00:26:10.836 12:45:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.836 12:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:11.404 12:45:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:11.404 [2024-10-01 12:45:53.902115] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:11.404 [2024-10-01 12:45:53.902285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:11.404 [2024-10-01 12:45:53.920979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:26:11.404 [2024-10-01 12:45:53.929698] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:11.404 12:45:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:12.783 12:45:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:12.783 12:45:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:12.783 12:45:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:12.783 12:45:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:12.783 12:45:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:12.783 12:45:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.783 12:45:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.783 12:45:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:12.783 "name": "raid_bdev1", 00:26:12.783 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:12.783 "strip_size_kb": 64, 00:26:12.783 "state": "online", 00:26:12.783 "raid_level": "raid5f", 00:26:12.783 "superblock": false, 00:26:12.783 "num_base_bdevs": 3, 00:26:12.783 "num_base_bdevs_discovered": 3, 00:26:12.783 "num_base_bdevs_operational": 3, 00:26:12.783 "process": { 00:26:12.783 "type": "rebuild", 00:26:12.783 "target": "spare", 00:26:12.783 "progress": { 00:26:12.783 "blocks": 22528, 00:26:12.783 "percent": 17 00:26:12.783 } 00:26:12.783 }, 00:26:12.783 "base_bdevs_list": [ 00:26:12.783 { 00:26:12.783 "name": "spare", 00:26:12.783 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:12.783 "is_configured": true, 00:26:12.783 "data_offset": 0, 00:26:12.783 "data_size": 65536 00:26:12.783 }, 00:26:12.783 { 00:26:12.783 "name": "BaseBdev2", 00:26:12.783 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:12.783 "is_configured": true, 00:26:12.783 "data_offset": 0, 00:26:12.783 "data_size": 65536 00:26:12.783 }, 00:26:12.783 { 00:26:12.783 "name": "BaseBdev3", 00:26:12.783 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:12.783 "is_configured": true, 00:26:12.783 "data_offset": 0, 00:26:12.783 "data_size": 65536 00:26:12.783 } 00:26:12.783 ] 00:26:12.783 }' 00:26:12.783 12:45:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:12.783 12:45:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:12.783 12:45:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:12.783 12:45:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:12.783 12:45:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:13.042 [2024-10-01 12:45:55.392221] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:13.042 [2024-10-01 12:45:55.438106] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:13.042 [2024-10-01 12:45:55.438291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.042 12:45:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.301 12:45:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.301 "name": "raid_bdev1", 00:26:13.301 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:13.301 "strip_size_kb": 64, 00:26:13.301 "state": "online", 00:26:13.301 "raid_level": "raid5f", 00:26:13.301 "superblock": false, 00:26:13.301 "num_base_bdevs": 3, 00:26:13.301 "num_base_bdevs_discovered": 2, 00:26:13.301 "num_base_bdevs_operational": 2, 00:26:13.301 "base_bdevs_list": [ 00:26:13.301 { 00:26:13.301 "name": null, 00:26:13.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.301 "is_configured": false, 00:26:13.301 "data_offset": 0, 00:26:13.301 "data_size": 65536 00:26:13.301 }, 00:26:13.301 { 00:26:13.301 "name": "BaseBdev2", 00:26:13.301 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:13.301 "is_configured": true, 00:26:13.301 "data_offset": 0, 00:26:13.301 "data_size": 65536 00:26:13.301 }, 00:26:13.301 { 00:26:13.301 "name": "BaseBdev3", 00:26:13.301 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:13.301 "is_configured": true, 00:26:13.301 "data_offset": 0, 00:26:13.301 "data_size": 65536 00:26:13.301 } 00:26:13.301 ] 00:26:13.301 }' 00:26:13.301 12:45:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.301 12:45:55 -- common/autotest_common.sh@10 -- # set +x 00:26:13.869 12:45:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:13.869 12:45:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:13.869 12:45:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:13.869 12:45:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:13.869 12:45:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:13.869 12:45:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.869 12:45:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.128 12:45:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:14.128 "name": "raid_bdev1", 00:26:14.128 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:14.128 "strip_size_kb": 64, 00:26:14.128 "state": "online", 00:26:14.128 "raid_level": "raid5f", 00:26:14.128 "superblock": false, 00:26:14.128 "num_base_bdevs": 3, 00:26:14.128 
"num_base_bdevs_discovered": 2, 00:26:14.128 "num_base_bdevs_operational": 2, 00:26:14.128 "base_bdevs_list": [ 00:26:14.128 { 00:26:14.128 "name": null, 00:26:14.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.128 "is_configured": false, 00:26:14.128 "data_offset": 0, 00:26:14.128 "data_size": 65536 00:26:14.128 }, 00:26:14.128 { 00:26:14.128 "name": "BaseBdev2", 00:26:14.128 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:14.128 "is_configured": true, 00:26:14.128 "data_offset": 0, 00:26:14.128 "data_size": 65536 00:26:14.128 }, 00:26:14.128 { 00:26:14.128 "name": "BaseBdev3", 00:26:14.128 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:14.128 "is_configured": true, 00:26:14.128 "data_offset": 0, 00:26:14.128 "data_size": 65536 00:26:14.128 } 00:26:14.128 ] 00:26:14.128 }' 00:26:14.128 12:45:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:14.129 12:45:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:14.129 12:45:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:14.129 12:45:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:14.129 12:45:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:14.387 [2024-10-01 12:45:56.710716] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:14.388 [2024-10-01 12:45:56.710880] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:14.388 [2024-10-01 12:45:56.727683] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:26:14.388 [2024-10-01 12:45:56.748270] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:14.388 12:45:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:15.322 12:45:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.322 12:45:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:15.322 12:45:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:15.322 12:45:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:15.322 12:45:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:15.322 12:45:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.322 12:45:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.581 12:45:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:15.581 "name": "raid_bdev1", 00:26:15.581 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:15.581 "strip_size_kb": 64, 00:26:15.581 "state": "online", 00:26:15.581 "raid_level": "raid5f", 00:26:15.581 "superblock": false, 00:26:15.581 "num_base_bdevs": 3, 00:26:15.581 "num_base_bdevs_discovered": 3, 00:26:15.581 "num_base_bdevs_operational": 3, 00:26:15.581 "process": { 00:26:15.581 "type": "rebuild", 00:26:15.581 "target": "spare", 00:26:15.581 "progress": { 00:26:15.581 "blocks": 24576, 00:26:15.581 "percent": 18 00:26:15.581 } 00:26:15.581 }, 00:26:15.581 "base_bdevs_list": [ 00:26:15.581 { 00:26:15.581 "name": "spare", 00:26:15.581 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:15.581 "is_configured": true, 00:26:15.581 "data_offset": 0, 00:26:15.581 "data_size": 65536 00:26:15.581 }, 00:26:15.581 { 00:26:15.581 "name": "BaseBdev2", 00:26:15.581 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:15.581 "is_configured": true, 
00:26:15.581 "data_offset": 0, 00:26:15.581 "data_size": 65536 00:26:15.581 }, 00:26:15.581 { 00:26:15.581 "name": "BaseBdev3", 00:26:15.581 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:15.581 "is_configured": true, 00:26:15.581 "data_offset": 0, 00:26:15.581 "data_size": 65536 00:26:15.581 } 00:26:15.581 ] 00:26:15.581 }' 00:26:15.581 12:45:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@657 -- # local timeout=545 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.582 12:45:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.841 12:45:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:15.841 "name": "raid_bdev1", 00:26:15.841 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:15.841 "strip_size_kb": 64, 00:26:15.841 "state": "online", 00:26:15.841 "raid_level": "raid5f", 00:26:15.841 "superblock": false, 00:26:15.841 "num_base_bdevs": 3, 00:26:15.841 "num_base_bdevs_discovered": 3, 00:26:15.841 "num_base_bdevs_operational": 3, 00:26:15.841 "process": { 00:26:15.841 "type": "rebuild", 00:26:15.841 "target": "spare", 00:26:15.841 "progress": { 00:26:15.841 "blocks": 30720, 00:26:15.841 "percent": 23 00:26:15.841 } 00:26:15.841 }, 00:26:15.841 "base_bdevs_list": [ 00:26:15.841 { 00:26:15.841 "name": "spare", 00:26:15.841 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:15.841 "is_configured": true, 00:26:15.841 "data_offset": 0, 00:26:15.841 "data_size": 65536 00:26:15.841 }, 00:26:15.841 { 00:26:15.841 "name": "BaseBdev2", 00:26:15.841 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:15.841 "is_configured": true, 00:26:15.841 "data_offset": 0, 00:26:15.841 "data_size": 65536 00:26:15.841 }, 00:26:15.841 { 00:26:15.841 "name": "BaseBdev3", 00:26:15.841 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:15.841 "is_configured": true, 00:26:15.841 "data_offset": 0, 00:26:15.841 "data_size": 65536 00:26:15.841 } 00:26:15.841 ] 00:26:15.841 }' 00:26:15.841 12:45:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:15.841 12:45:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:15.841 12:45:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:15.841 12:45:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:15.841 12:45:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:17.215 
12:45:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:17.215 "name": "raid_bdev1", 00:26:17.215 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:17.215 "strip_size_kb": 64, 00:26:17.215 "state": "online", 00:26:17.215 "raid_level": "raid5f", 00:26:17.215 "superblock": false, 00:26:17.215 "num_base_bdevs": 3, 00:26:17.215 "num_base_bdevs_discovered": 3, 00:26:17.215 "num_base_bdevs_operational": 3, 00:26:17.215 "process": { 00:26:17.215 "type": "rebuild", 00:26:17.215 "target": "spare", 00:26:17.215 "progress": { 00:26:17.215 "blocks": 55296, 00:26:17.215 "percent": 42 00:26:17.215 } 00:26:17.215 }, 00:26:17.215 "base_bdevs_list": [ 00:26:17.215 { 00:26:17.215 "name": "spare", 00:26:17.215 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:17.215 "is_configured": true, 00:26:17.215 "data_offset": 0, 00:26:17.215 "data_size": 65536 00:26:17.215 }, 00:26:17.215 { 00:26:17.215 "name": "BaseBdev2", 00:26:17.215 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:17.215 "is_configured": true, 00:26:17.215 "data_offset": 0, 00:26:17.215 "data_size": 65536 00:26:17.215 }, 00:26:17.215 { 00:26:17.215 "name": "BaseBdev3", 00:26:17.215 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:17.215 "is_configured": true, 00:26:17.215 "data_offset": 0, 00:26:17.215 "data_size": 65536 00:26:17.215 } 00:26:17.215 ] 00:26:17.215 }' 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:17.215 12:45:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:18.149 12:46:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:18.150 12:46:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:18.150 12:46:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:18.150 12:46:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:18.150 12:46:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:18.150 12:46:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:18.150 12:46:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.150 12:46:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.408 12:46:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:18.408 "name": "raid_bdev1", 00:26:18.408 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:18.408 "strip_size_kb": 64, 00:26:18.408 "state": "online", 00:26:18.408 "raid_level": "raid5f", 00:26:18.408 "superblock": false, 00:26:18.408 "num_base_bdevs": 3, 00:26:18.408 "num_base_bdevs_discovered": 3, 00:26:18.408 "num_base_bdevs_operational": 3, 
00:26:18.408 "process": { 00:26:18.408 "type": "rebuild", 00:26:18.408 "target": "spare", 00:26:18.408 "progress": { 00:26:18.408 "blocks": 81920, 00:26:18.408 "percent": 62 00:26:18.408 } 00:26:18.408 }, 00:26:18.408 "base_bdevs_list": [ 00:26:18.408 { 00:26:18.408 "name": "spare", 00:26:18.408 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:18.408 "is_configured": true, 00:26:18.408 "data_offset": 0, 00:26:18.408 "data_size": 65536 00:26:18.408 }, 00:26:18.408 { 00:26:18.408 "name": "BaseBdev2", 00:26:18.408 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:18.408 "is_configured": true, 00:26:18.408 "data_offset": 0, 00:26:18.408 "data_size": 65536 00:26:18.408 }, 00:26:18.408 { 00:26:18.408 "name": "BaseBdev3", 00:26:18.408 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:18.408 "is_configured": true, 00:26:18.408 "data_offset": 0, 00:26:18.408 "data_size": 65536 00:26:18.408 } 00:26:18.408 ] 00:26:18.408 }' 00:26:18.408 12:46:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:18.408 12:46:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.408 12:46:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:18.666 12:46:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.666 12:46:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.600 12:46:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.860 12:46:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:19.860 "name": "raid_bdev1", 00:26:19.860 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:19.860 "strip_size_kb": 64, 00:26:19.860 "state": "online", 00:26:19.860 "raid_level": "raid5f", 00:26:19.860 "superblock": false, 00:26:19.860 "num_base_bdevs": 3, 00:26:19.860 "num_base_bdevs_discovered": 3, 00:26:19.860 "num_base_bdevs_operational": 3, 00:26:19.860 "process": { 00:26:19.860 "type": "rebuild", 00:26:19.860 "target": "spare", 00:26:19.860 "progress": { 00:26:19.860 "blocks": 108544, 00:26:19.860 "percent": 82 00:26:19.860 } 00:26:19.860 }, 00:26:19.860 "base_bdevs_list": [ 00:26:19.860 { 00:26:19.860 "name": "spare", 00:26:19.860 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:19.860 "is_configured": true, 00:26:19.860 "data_offset": 0, 00:26:19.860 "data_size": 65536 00:26:19.860 }, 00:26:19.860 { 00:26:19.860 "name": "BaseBdev2", 00:26:19.860 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:19.860 "is_configured": true, 00:26:19.860 "data_offset": 0, 00:26:19.860 "data_size": 65536 00:26:19.860 }, 00:26:19.860 { 00:26:19.860 "name": "BaseBdev3", 00:26:19.860 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:19.860 "is_configured": true, 00:26:19.860 "data_offset": 0, 00:26:19.860 "data_size": 65536 00:26:19.860 } 00:26:19.860 ] 00:26:19.860 }' 00:26:19.860 12:46:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:19.860 12:46:02 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.860 12:46:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:19.860 12:46:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.860 12:46:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:20.795 [2024-10-01 12:46:03.188859] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:20.795 [2024-10-01 12:46:03.189171] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:20.796 [2024-10-01 12:46:03.189366] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.796 12:46:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:21.054 "name": "raid_bdev1", 00:26:21.054 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:21.054 "strip_size_kb": 64, 00:26:21.054 "state": "online", 00:26:21.054 "raid_level": "raid5f", 00:26:21.054 "superblock": false, 00:26:21.054 "num_base_bdevs": 3, 00:26:21.054 "num_base_bdevs_discovered": 3, 00:26:21.054 "num_base_bdevs_operational": 3, 00:26:21.054 "base_bdevs_list": [ 00:26:21.054 { 00:26:21.054 "name": "spare", 00:26:21.054 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:21.054 "is_configured": true, 00:26:21.054 "data_offset": 0, 00:26:21.054 "data_size": 65536 00:26:21.054 }, 00:26:21.054 { 00:26:21.054 "name": "BaseBdev2", 00:26:21.054 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:21.054 "is_configured": true, 00:26:21.054 "data_offset": 0, 00:26:21.054 "data_size": 65536 00:26:21.054 }, 00:26:21.054 { 00:26:21.054 "name": "BaseBdev3", 00:26:21.054 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:21.054 "is_configured": true, 00:26:21.054 "data_offset": 0, 00:26:21.054 "data_size": 65536 00:26:21.054 } 00:26:21.054 ] 00:26:21.054 }' 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@660 -- # break 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:21.054 12:46:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:21.055 12:46:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:21.055 12:46:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:21.055 12:46:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:21.055 12:46:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.055 12:46:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:26:21.313 12:46:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:21.313 "name": "raid_bdev1", 00:26:21.313 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:21.313 "strip_size_kb": 64, 00:26:21.313 "state": "online", 00:26:21.313 "raid_level": "raid5f", 00:26:21.313 "superblock": false, 00:26:21.313 "num_base_bdevs": 3, 00:26:21.313 "num_base_bdevs_discovered": 3, 00:26:21.313 "num_base_bdevs_operational": 3, 00:26:21.313 "base_bdevs_list": [ 00:26:21.313 { 00:26:21.313 "name": "spare", 00:26:21.313 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:21.313 "is_configured": true, 00:26:21.313 "data_offset": 0, 00:26:21.313 "data_size": 65536 00:26:21.313 }, 00:26:21.313 { 00:26:21.313 "name": "BaseBdev2", 00:26:21.313 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:21.313 "is_configured": true, 00:26:21.313 "data_offset": 0, 00:26:21.313 "data_size": 65536 00:26:21.313 }, 00:26:21.313 { 00:26:21.313 "name": "BaseBdev3", 00:26:21.313 "uuid": "1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:21.313 "is_configured": true, 00:26:21.313 "data_offset": 0, 00:26:21.313 "data_size": 65536 00:26:21.313 } 00:26:21.313 ] 00:26:21.313 }' 00:26:21.313 12:46:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:21.313 12:46:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:21.313 12:46:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:21.313 12:46:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:21.314 12:46:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:21.572 12:46:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.572 12:46:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.572 12:46:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:21.572 "name": "raid_bdev1", 00:26:21.572 "uuid": "7c604e23-f0a6-45a7-a88e-93e4a6e53853", 00:26:21.572 "strip_size_kb": 64, 00:26:21.572 "state": "online", 00:26:21.572 "raid_level": "raid5f", 00:26:21.572 "superblock": false, 00:26:21.572 "num_base_bdevs": 3, 00:26:21.572 "num_base_bdevs_discovered": 3, 00:26:21.572 "num_base_bdevs_operational": 3, 00:26:21.572 "base_bdevs_list": [ 00:26:21.572 { 00:26:21.572 "name": "spare", 00:26:21.572 "uuid": "6e47a91d-0c25-5d1d-b984-52834e6a4e98", 00:26:21.572 "is_configured": true, 00:26:21.572 "data_offset": 0, 00:26:21.572 "data_size": 65536 00:26:21.572 }, 00:26:21.572 { 00:26:21.572 "name": "BaseBdev2", 00:26:21.572 "uuid": "77937e31-6a4c-464f-b30e-f5dec2aef6da", 00:26:21.572 "is_configured": true, 00:26:21.572 "data_offset": 0, 00:26:21.572 "data_size": 65536 00:26:21.572 }, 00:26:21.572 { 00:26:21.572 "name": "BaseBdev3", 00:26:21.572 "uuid": 
"1cd791bf-7874-4666-a77d-9d54aa143507", 00:26:21.572 "is_configured": true, 00:26:21.572 "data_offset": 0, 00:26:21.572 "data_size": 65536 00:26:21.572 } 00:26:21.572 ] 00:26:21.572 }' 00:26:21.572 12:46:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:21.572 12:46:04 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 12:46:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:22.396 [2024-10-01 12:46:04.747101] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.396 [2024-10-01 12:46:04.747424] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.396 [2024-10-01 12:46:04.747707] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.396 [2024-10-01 12:46:04.747918] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.396 [2024-10-01 12:46:04.748008] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:26:22.396 12:46:04 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.396 12:46:04 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:22.655 12:46:04 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:22.655 12:46:04 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:22.655 12:46:04 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@12 -- # local i 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:22.655 12:46:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:22.655 /dev/nbd0 00:26:22.655 12:46:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:22.914 12:46:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:22.914 12:46:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:22.914 12:46:05 -- common/autotest_common.sh@857 -- # local i 00:26:22.914 12:46:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:22.914 12:46:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:22.914 12:46:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:22.914 12:46:05 -- common/autotest_common.sh@861 -- # break 00:26:22.914 12:46:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:22.914 12:46:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:22.914 12:46:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:22.914 1+0 records in 00:26:22.914 1+0 records out 00:26:22.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583725 s, 7.0 MB/s 00:26:22.914 12:46:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.914 12:46:05 
-- common/autotest_common.sh@874 -- # size=4096 00:26:22.914 12:46:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.914 12:46:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:22.914 12:46:05 -- common/autotest_common.sh@877 -- # return 0 00:26:22.914 12:46:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:22.914 12:46:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:22.914 12:46:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:22.914 /dev/nbd1 00:26:22.914 12:46:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:22.914 12:46:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:22.914 12:46:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:22.914 12:46:05 -- common/autotest_common.sh@857 -- # local i 00:26:22.914 12:46:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:22.914 12:46:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:22.914 12:46:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:23.173 12:46:05 -- common/autotest_common.sh@861 -- # break 00:26:23.173 12:46:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:23.173 12:46:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:23.173 12:46:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:23.173 1+0 records in 00:26:23.173 1+0 records out 00:26:23.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657417 s, 6.2 MB/s 00:26:23.173 12:46:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.173 12:46:05 -- common/autotest_common.sh@874 -- # size=4096 00:26:23.173 12:46:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.173 12:46:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:23.173 12:46:05 -- common/autotest_common.sh@877 -- # return 0 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:23.173 12:46:05 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:23.173 12:46:05 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@51 -- # local i 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:23.173 12:46:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@41 -- # break 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@45 -- # return 0 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:26:23.432 12:46:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:23.690 12:46:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:23.691 12:46:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:23.691 12:46:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:23.691 12:46:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:23.691 12:46:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.691 12:46:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:23.691 12:46:06 -- bdev/nbd_common.sh@41 -- # break 00:26:23.691 12:46:06 -- bdev/nbd_common.sh@45 -- # return 0 00:26:23.691 12:46:06 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:26:23.691 12:46:06 -- bdev/bdev_raid.sh@709 -- # killprocess 128531 00:26:23.691 12:46:06 -- common/autotest_common.sh@926 -- # '[' -z 128531 ']' 00:26:23.691 12:46:06 -- common/autotest_common.sh@930 -- # kill -0 128531 00:26:23.691 12:46:06 -- common/autotest_common.sh@931 -- # uname 00:26:23.691 12:46:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.691 12:46:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128531 00:26:23.691 12:46:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:23.691 12:46:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:23.691 12:46:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128531' 00:26:23.691 killing process with pid 128531 00:26:23.691 12:46:06 -- common/autotest_common.sh@945 -- # kill 128531 00:26:23.691 Received shutdown signal, test time was about 60.000000 seconds 00:26:23.691 00:26:23.691 Latency(us) 00:26:23.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.691 =================================================================================================================== 00:26:23.691 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:23.691 12:46:06 -- common/autotest_common.sh@950 -- # wait 128531 00:26:23.691 [2024-10-01 12:46:06.141418] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:24.257 [2024-10-01 12:46:06.582664] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:25.633 ************************************ 00:26:25.633 END TEST raid5f_rebuild_test 00:26:25.633 ************************************ 00:26:25.633 12:46:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:25.633 00:26:25.633 real 0m19.558s 00:26:25.633 user 0m27.572s 00:26:25.633 sys 0m2.824s 00:26:25.633 12:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.633 12:46:08 -- common/autotest_common.sh@10 -- # set +x 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:26:25.634 12:46:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:25.634 12:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.634 12:46:08 -- common/autotest_common.sh@10 -- # set +x 00:26:25.634 ************************************ 00:26:25.634 START TEST raid5f_rebuild_test_sb 00:26:25.634 ************************************ 00:26:25.634 12:46:08 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@519 -- # 
local superblock=true 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=129071 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129071 /var/tmp/spdk-raid.sock 00:26:25.634 12:46:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:25.634 12:46:08 -- common/autotest_common.sh@819 -- # '[' -z 129071 ']' 00:26:25.634 12:46:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:25.892 12:46:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:25.892 12:46:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:25.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:25.892 12:46:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:25.892 12:46:08 -- common/autotest_common.sh@10 -- # set +x 00:26:25.892 [2024-10-01 12:46:08.226703] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:25.892 [2024-10-01 12:46:08.227047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129071 ] 00:26:25.892 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:25.892 Zero copy mechanism will not be used. 
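The launch just traced is the standard harness pattern for these raid tests: bdevperf is started with -z so it sits idle on a private RPC socket, and the test polls that socket before issuing any bdev_* RPCs. A minimal sketch of the same pattern, assuming an SPDK checkout as the working directory (the polling loop is a simplified stand-in for the waitforlisten helper the test actually calls):

  sock=/var/tmp/spdk-raid.sock
  # -z keeps bdevperf idle until RPCs arrive; -r points it at the private socket
  ./build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # rpc_get_methods succeeds once the app's RPC server is listening
  until ./scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; do
      kill -0 "$raid_pid" || exit 1   # give up if bdevperf died during startup
      sleep 0.5
  done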
00:26:25.892 [2024-10-01 12:46:08.410577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.150 [2024-10-01 12:46:08.677078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.407 [2024-10-01 12:46:08.937713] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:27.338 12:46:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.338 12:46:09 -- common/autotest_common.sh@852 -- # return 0 00:26:27.338 12:46:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:27.339 12:46:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:27.339 12:46:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:27.595 BaseBdev1_malloc 00:26:27.595 12:46:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:27.851 [2024-10-01 12:46:10.165742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:27.851 [2024-10-01 12:46:10.166164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.851 [2024-10-01 12:46:10.166241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:26:27.851 [2024-10-01 12:46:10.166367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.851 [2024-10-01 12:46:10.169283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.851 [2024-10-01 12:46:10.169460] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:27.851 BaseBdev1 00:26:27.851 12:46:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:27.851 12:46:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:27.851 12:46:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:28.108 BaseBdev2_malloc 00:26:28.108 12:46:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:28.108 [2024-10-01 12:46:10.615041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:28.108 [2024-10-01 12:46:10.615435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:28.108 [2024-10-01 12:46:10.615525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:28.108 [2024-10-01 12:46:10.615670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.108 [2024-10-01 12:46:10.618384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.108 [2024-10-01 12:46:10.618570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:28.108 BaseBdev2 00:26:28.108 12:46:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:28.108 12:46:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:28.108 12:46:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:28.365 BaseBdev3_malloc 00:26:28.365 12:46:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:26:28.622 [2024-10-01 12:46:11.061144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:28.622 [2024-10-01 12:46:11.061584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:28.622 [2024-10-01 12:46:11.061668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:28.622 [2024-10-01 12:46:11.061798] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.622 [2024-10-01 12:46:11.064445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.622 [2024-10-01 12:46:11.064626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:28.622 BaseBdev3 00:26:28.622 12:46:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:28.880 spare_malloc 00:26:28.880 12:46:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:29.138 spare_delay 00:26:29.138 12:46:11 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:29.397 [2024-10-01 12:46:11.691679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:29.397 [2024-10-01 12:46:11.692087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.397 [2024-10-01 12:46:11.692165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:29.397 [2024-10-01 12:46:11.692287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.397 [2024-10-01 12:46:11.695047] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.397 [2024-10-01 12:46:11.695230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:29.397 spare 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:26:29.397 [2024-10-01 12:46:11.883575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:29.397 [2024-10-01 12:46:11.886132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:29.397 [2024-10-01 12:46:11.886355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:29.397 [2024-10-01 12:46:11.886602] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:26:29.397 [2024-10-01 12:46:11.886715] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:29.397 [2024-10-01 12:46:11.886932] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:29.397 [2024-10-01 12:46:11.893968] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:26:29.397 [2024-10-01 12:46:11.894086] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:26:29.397 [2024-10-01 12:46:11.894380] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.397 12:46:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.656 12:46:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:29.656 "name": "raid_bdev1", 00:26:29.656 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:29.656 "strip_size_kb": 64, 00:26:29.656 "state": "online", 00:26:29.656 "raid_level": "raid5f", 00:26:29.656 "superblock": true, 00:26:29.656 "num_base_bdevs": 3, 00:26:29.656 "num_base_bdevs_discovered": 3, 00:26:29.656 "num_base_bdevs_operational": 3, 00:26:29.656 "base_bdevs_list": [ 00:26:29.656 { 00:26:29.656 "name": "BaseBdev1", 00:26:29.656 "uuid": "6a1f65c4-9d28-5ca3-836c-ceeee0606aa1", 00:26:29.656 "is_configured": true, 00:26:29.656 "data_offset": 2048, 00:26:29.656 "data_size": 63488 00:26:29.656 }, 00:26:29.656 { 00:26:29.656 "name": "BaseBdev2", 00:26:29.656 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:29.656 "is_configured": true, 00:26:29.656 "data_offset": 2048, 00:26:29.656 "data_size": 63488 00:26:29.656 }, 00:26:29.656 { 00:26:29.656 "name": "BaseBdev3", 00:26:29.656 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:29.656 "is_configured": true, 00:26:29.656 "data_offset": 2048, 00:26:29.656 "data_size": 63488 00:26:29.656 } 00:26:29.656 ] 00:26:29.656 }' 00:26:29.656 12:46:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:29.656 12:46:12 -- common/autotest_common.sh@10 -- # set +x 00:26:30.222 12:46:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:30.222 12:46:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:30.480 [2024-10-01 12:46:12.824260] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:30.480 12:46:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:26:30.480 12:46:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.480 12:46:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:30.739 12:46:13 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:26:30.739 12:46:13 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:30.739 12:46:13 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:30.739 12:46:13 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:30.739 12:46:13 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@12 -- # local i 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:30.739 12:46:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:30.739 [2024-10-01 12:46:13.219739] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:30.739 /dev/nbd0 00:26:30.997 12:46:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:30.997 12:46:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:30.997 12:46:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:30.997 12:46:13 -- common/autotest_common.sh@857 -- # local i 00:26:30.997 12:46:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:30.997 12:46:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:30.997 12:46:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:30.997 12:46:13 -- common/autotest_common.sh@861 -- # break 00:26:30.997 12:46:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:30.997 12:46:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:30.997 12:46:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:30.997 1+0 records in 00:26:30.997 1+0 records out 00:26:30.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708551 s, 5.8 MB/s 00:26:30.997 12:46:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.997 12:46:13 -- common/autotest_common.sh@874 -- # size=4096 00:26:30.997 12:46:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.997 12:46:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:30.997 12:46:13 -- common/autotest_common.sh@877 -- # return 0 00:26:30.997 12:46:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:30.997 12:46:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:30.997 12:46:13 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:30.997 12:46:13 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:26:30.997 12:46:13 -- bdev/bdev_raid.sh@582 -- # echo 128 00:26:30.997 12:46:13 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:26:31.255 496+0 records in 00:26:31.255 496+0 records out 00:26:31.255 65011712 bytes (65 MB, 62 MiB) copied, 0.373867 s, 174 MB/s 00:26:31.255 12:46:13 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:31.255 12:46:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:31.255 12:46:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:31.255 12:46:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:31.255 12:46:13 -- bdev/nbd_common.sh@51 -- # local i 00:26:31.255 12:46:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.255 12:46:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:31.512 [2024-10-01 12:46:13.910074] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.512 12:46:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:31.512 12:46:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
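The full-stripe write sizing in the dd above can be checked by hand: raid5f over three base bdevs keeps two data strips per stripe, so with a 64 KiB strip the write unit is 2 * 64 KiB = 131072 bytes (256 blocks of 512 B, matching write_unit_size=256), and the 126976-block raid bdev holds exactly 126976 / 256 = 496 such writes, for 496 * 131072 = 65011712 bytes, the byte count dd reports. A quick shell check using only numbers from the trace:

  strip_kb=64; num_base_bdevs=3; blocklen=512; raid_blocks=126976
  full_stripe=$(( strip_kb * 1024 * (num_base_bdevs - 1) ))   # 131072 bytes
  echo $(( full_stripe / blocklen ))                          # 256 -> write_unit_size
  echo $(( raid_blocks / (full_stripe / blocklen) ))          # 496 -> dd count
  echo $(( full_stripe * 496 ))                               # 65011712 -> bytes copied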
00:26:31.512 12:46:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:31.512 12:46:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.513 12:46:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.513 12:46:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:31.513 12:46:13 -- bdev/nbd_common.sh@41 -- # break 00:26:31.513 12:46:13 -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.513 12:46:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:31.770 [2024-10-01 12:46:14.109702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:31.770 12:46:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:31.771 12:46:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:31.771 12:46:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.771 12:46:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.031 12:46:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:32.031 "name": "raid_bdev1", 00:26:32.031 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:32.031 "strip_size_kb": 64, 00:26:32.031 "state": "online", 00:26:32.031 "raid_level": "raid5f", 00:26:32.031 "superblock": true, 00:26:32.031 "num_base_bdevs": 3, 00:26:32.031 "num_base_bdevs_discovered": 2, 00:26:32.031 "num_base_bdevs_operational": 2, 00:26:32.031 "base_bdevs_list": [ 00:26:32.031 { 00:26:32.031 "name": null, 00:26:32.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.031 "is_configured": false, 00:26:32.031 "data_offset": 2048, 00:26:32.031 "data_size": 63488 00:26:32.031 }, 00:26:32.031 { 00:26:32.031 "name": "BaseBdev2", 00:26:32.031 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:32.031 "is_configured": true, 00:26:32.031 "data_offset": 2048, 00:26:32.031 "data_size": 63488 00:26:32.031 }, 00:26:32.031 { 00:26:32.031 "name": "BaseBdev3", 00:26:32.031 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:32.031 "is_configured": true, 00:26:32.031 "data_offset": 2048, 00:26:32.031 "data_size": 63488 00:26:32.031 } 00:26:32.031 ] 00:26:32.031 }' 00:26:32.031 12:46:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:32.031 12:46:14 -- common/autotest_common.sh@10 -- # set +x 00:26:32.599 12:46:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:32.599 [2024-10-01 12:46:15.064300] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:32.599 [2024-10-01 12:46:15.064608] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:32.599 [2024-10-01 12:46:15.085154] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70 
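The verify_raid_bdev_state run above reduces to one RPC plus jq assertions; a minimal sketch of that check, reusing the exact filter the trace shows, with the expected values taken from the state after BaseBdev1 was removed:

  sock=/var/tmp/spdk-raid.sock
  info=$(./scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r '.state' <<< "$info")" = online ] || exit 1
  # the removed slot stays in base_bdevs_list with a null name, so the
  # array is still 3 wide while only 2 members are discovered
  [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 2 ] || exit 1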
00:26:32.599 [2024-10-01 12:46:15.094523] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:32.599 12:46:15 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:33.977 "name": "raid_bdev1", 00:26:33.977 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:33.977 "strip_size_kb": 64, 00:26:33.977 "state": "online", 00:26:33.977 "raid_level": "raid5f", 00:26:33.977 "superblock": true, 00:26:33.977 "num_base_bdevs": 3, 00:26:33.977 "num_base_bdevs_discovered": 3, 00:26:33.977 "num_base_bdevs_operational": 3, 00:26:33.977 "process": { 00:26:33.977 "type": "rebuild", 00:26:33.977 "target": "spare", 00:26:33.977 "progress": { 00:26:33.977 "blocks": 22528, 00:26:33.977 "percent": 17 00:26:33.977 } 00:26:33.977 }, 00:26:33.977 "base_bdevs_list": [ 00:26:33.977 { 00:26:33.977 "name": "spare", 00:26:33.977 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:33.977 "is_configured": true, 00:26:33.977 "data_offset": 2048, 00:26:33.977 "data_size": 63488 00:26:33.977 }, 00:26:33.977 { 00:26:33.977 "name": "BaseBdev2", 00:26:33.977 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:33.977 "is_configured": true, 00:26:33.977 "data_offset": 2048, 00:26:33.977 "data_size": 63488 00:26:33.977 }, 00:26:33.977 { 00:26:33.977 "name": "BaseBdev3", 00:26:33.977 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:33.977 "is_configured": true, 00:26:33.977 "data_offset": 2048, 00:26:33.977 "data_size": 63488 00:26:33.977 } 00:26:33.977 ] 00:26:33.977 }' 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:33.977 12:46:16 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:34.236 [2024-10-01 12:46:16.577477] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:34.237 [2024-10-01 12:46:16.608513] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:34.237 [2024-10-01 12:46:16.608798] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.237 12:46:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.496 12:46:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:34.496 "name": "raid_bdev1", 00:26:34.496 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:34.496 "strip_size_kb": 64, 00:26:34.496 "state": "online", 00:26:34.496 "raid_level": "raid5f", 00:26:34.496 "superblock": true, 00:26:34.496 "num_base_bdevs": 3, 00:26:34.496 "num_base_bdevs_discovered": 2, 00:26:34.496 "num_base_bdevs_operational": 2, 00:26:34.496 "base_bdevs_list": [ 00:26:34.496 { 00:26:34.496 "name": null, 00:26:34.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.496 "is_configured": false, 00:26:34.496 "data_offset": 2048, 00:26:34.496 "data_size": 63488 00:26:34.496 }, 00:26:34.496 { 00:26:34.496 "name": "BaseBdev2", 00:26:34.496 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:34.496 "is_configured": true, 00:26:34.496 "data_offset": 2048, 00:26:34.496 "data_size": 63488 00:26:34.496 }, 00:26:34.496 { 00:26:34.496 "name": "BaseBdev3", 00:26:34.496 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:34.496 "is_configured": true, 00:26:34.496 "data_offset": 2048, 00:26:34.496 "data_size": 63488 00:26:34.496 } 00:26:34.496 ] 00:26:34.496 }' 00:26:34.496 12:46:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:34.496 12:46:16 -- common/autotest_common.sh@10 -- # set +x 00:26:35.066 12:46:17 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:35.066 12:46:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:35.066 12:46:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:35.066 12:46:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:35.066 12:46:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:35.066 12:46:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.066 12:46:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.325 12:46:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:35.325 "name": "raid_bdev1", 00:26:35.325 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:35.325 "strip_size_kb": 64, 00:26:35.325 "state": "online", 00:26:35.325 "raid_level": "raid5f", 00:26:35.325 "superblock": true, 00:26:35.325 "num_base_bdevs": 3, 00:26:35.325 "num_base_bdevs_discovered": 2, 00:26:35.325 "num_base_bdevs_operational": 2, 00:26:35.325 "base_bdevs_list": [ 00:26:35.325 { 00:26:35.325 "name": null, 00:26:35.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.325 "is_configured": false, 00:26:35.325 "data_offset": 2048, 00:26:35.325 "data_size": 63488 00:26:35.325 }, 00:26:35.325 { 00:26:35.325 "name": "BaseBdev2", 00:26:35.325 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:35.325 "is_configured": true, 00:26:35.325 "data_offset": 2048, 00:26:35.325 "data_size": 63488 00:26:35.325 }, 00:26:35.325 { 00:26:35.325 "name": "BaseBdev3", 00:26:35.325 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:35.325 
"is_configured": true, 00:26:35.325 "data_offset": 2048, 00:26:35.325 "data_size": 63488 00:26:35.325 } 00:26:35.325 ] 00:26:35.325 }' 00:26:35.325 12:46:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:35.325 12:46:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:35.325 12:46:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:35.325 12:46:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:35.325 12:46:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:35.585 [2024-10-01 12:46:17.895628] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:35.585 [2024-10-01 12:46:17.896010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:35.585 [2024-10-01 12:46:17.914951] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:26:35.585 [2024-10-01 12:46:17.923581] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:35.585 12:46:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:36.523 12:46:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:36.523 12:46:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:36.523 12:46:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:36.523 12:46:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:36.523 12:46:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:36.523 12:46:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.523 12:46:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:36.781 "name": "raid_bdev1", 00:26:36.781 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:36.781 "strip_size_kb": 64, 00:26:36.781 "state": "online", 00:26:36.781 "raid_level": "raid5f", 00:26:36.781 "superblock": true, 00:26:36.781 "num_base_bdevs": 3, 00:26:36.781 "num_base_bdevs_discovered": 3, 00:26:36.781 "num_base_bdevs_operational": 3, 00:26:36.781 "process": { 00:26:36.781 "type": "rebuild", 00:26:36.781 "target": "spare", 00:26:36.781 "progress": { 00:26:36.781 "blocks": 22528, 00:26:36.781 "percent": 17 00:26:36.781 } 00:26:36.781 }, 00:26:36.781 "base_bdevs_list": [ 00:26:36.781 { 00:26:36.781 "name": "spare", 00:26:36.781 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:36.781 "is_configured": true, 00:26:36.781 "data_offset": 2048, 00:26:36.781 "data_size": 63488 00:26:36.781 }, 00:26:36.781 { 00:26:36.781 "name": "BaseBdev2", 00:26:36.781 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:36.781 "is_configured": true, 00:26:36.781 "data_offset": 2048, 00:26:36.781 "data_size": 63488 00:26:36.781 }, 00:26:36.781 { 00:26:36.781 "name": "BaseBdev3", 00:26:36.781 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:36.781 "is_configured": true, 00:26:36.781 "data_offset": 2048, 00:26:36.781 "data_size": 63488 00:26:36.781 } 00:26:36.781 ] 00:26:36.781 }' 00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:26:36.781 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:26:36.781 12:46:19 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@657 -- # local timeout=566 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.782 12:46:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.040 12:46:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:37.040 "name": "raid_bdev1", 00:26:37.040 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:37.040 "strip_size_kb": 64, 00:26:37.040 "state": "online", 00:26:37.040 "raid_level": "raid5f", 00:26:37.040 "superblock": true, 00:26:37.040 "num_base_bdevs": 3, 00:26:37.040 "num_base_bdevs_discovered": 3, 00:26:37.040 "num_base_bdevs_operational": 3, 00:26:37.040 "process": { 00:26:37.040 "type": "rebuild", 00:26:37.040 "target": "spare", 00:26:37.040 "progress": { 00:26:37.040 "blocks": 28672, 00:26:37.040 "percent": 22 00:26:37.040 } 00:26:37.040 }, 00:26:37.040 "base_bdevs_list": [ 00:26:37.040 { 00:26:37.040 "name": "spare", 00:26:37.040 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:37.040 "is_configured": true, 00:26:37.040 "data_offset": 2048, 00:26:37.040 "data_size": 63488 00:26:37.040 }, 00:26:37.040 { 00:26:37.040 "name": "BaseBdev2", 00:26:37.040 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:37.040 "is_configured": true, 00:26:37.040 "data_offset": 2048, 00:26:37.040 "data_size": 63488 00:26:37.040 }, 00:26:37.040 { 00:26:37.040 "name": "BaseBdev3", 00:26:37.040 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:37.040 "is_configured": true, 00:26:37.040 "data_offset": 2048, 00:26:37.040 "data_size": 63488 00:26:37.040 } 00:26:37.040 ] 00:26:37.040 }' 00:26:37.040 12:46:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:37.040 12:46:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:37.040 12:46:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:37.040 12:46:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.040 12:46:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:37.978 12:46:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:37.978 12:46:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:37.978 12:46:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:37.978 12:46:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:37.979 12:46:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:37.979 12:46:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:38.238 12:46:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.238 12:46:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.238 12:46:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:38.238 "name": "raid_bdev1", 00:26:38.238 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:38.238 "strip_size_kb": 64, 00:26:38.238 "state": "online", 00:26:38.238 "raid_level": "raid5f", 00:26:38.238 "superblock": true, 00:26:38.238 "num_base_bdevs": 3, 00:26:38.238 "num_base_bdevs_discovered": 3, 00:26:38.238 "num_base_bdevs_operational": 3, 00:26:38.238 "process": { 00:26:38.238 "type": "rebuild", 00:26:38.238 "target": "spare", 00:26:38.238 "progress": { 00:26:38.238 "blocks": 55296, 00:26:38.238 "percent": 43 00:26:38.238 } 00:26:38.238 }, 00:26:38.238 "base_bdevs_list": [ 00:26:38.238 { 00:26:38.238 "name": "spare", 00:26:38.238 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:38.238 "is_configured": true, 00:26:38.238 "data_offset": 2048, 00:26:38.238 "data_size": 63488 00:26:38.238 }, 00:26:38.238 { 00:26:38.238 "name": "BaseBdev2", 00:26:38.238 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:38.238 "is_configured": true, 00:26:38.238 "data_offset": 2048, 00:26:38.238 "data_size": 63488 00:26:38.238 }, 00:26:38.238 { 00:26:38.239 "name": "BaseBdev3", 00:26:38.239 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:38.239 "is_configured": true, 00:26:38.239 "data_offset": 2048, 00:26:38.239 "data_size": 63488 00:26:38.239 } 00:26:38.239 ] 00:26:38.239 }' 00:26:38.239 12:46:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:38.239 12:46:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:38.239 12:46:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:38.498 12:46:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:38.498 12:46:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.434 12:46:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.693 12:46:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:39.693 "name": "raid_bdev1", 00:26:39.693 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:39.693 "strip_size_kb": 64, 00:26:39.693 "state": "online", 00:26:39.693 "raid_level": "raid5f", 00:26:39.693 "superblock": true, 00:26:39.693 "num_base_bdevs": 3, 00:26:39.693 "num_base_bdevs_discovered": 3, 00:26:39.693 "num_base_bdevs_operational": 3, 00:26:39.693 "process": { 00:26:39.693 "type": "rebuild", 00:26:39.693 "target": "spare", 00:26:39.693 "progress": { 00:26:39.693 "blocks": 81920, 00:26:39.693 "percent": 64 00:26:39.693 } 00:26:39.693 }, 00:26:39.693 "base_bdevs_list": [ 00:26:39.693 { 00:26:39.693 "name": "spare", 00:26:39.693 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:39.693 "is_configured": true, 00:26:39.693 "data_offset": 2048, 00:26:39.693 "data_size": 63488 00:26:39.693 }, 00:26:39.693 { 
00:26:39.693 "name": "BaseBdev2", 00:26:39.693 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:39.693 "is_configured": true, 00:26:39.693 "data_offset": 2048, 00:26:39.693 "data_size": 63488 00:26:39.693 }, 00:26:39.693 { 00:26:39.693 "name": "BaseBdev3", 00:26:39.693 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:39.693 "is_configured": true, 00:26:39.693 "data_offset": 2048, 00:26:39.693 "data_size": 63488 00:26:39.693 } 00:26:39.693 ] 00:26:39.693 }' 00:26:39.693 12:46:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:39.693 12:46:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:39.693 12:46:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:39.693 12:46:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:39.693 12:46:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:40.628 12:46:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:40.628 12:46:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.629 12:46:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:40.629 12:46:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:40.629 12:46:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:40.629 12:46:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:40.629 12:46:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.629 12:46:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.887 12:46:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:40.887 "name": "raid_bdev1", 00:26:40.887 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:40.887 "strip_size_kb": 64, 00:26:40.887 "state": "online", 00:26:40.887 "raid_level": "raid5f", 00:26:40.887 "superblock": true, 00:26:40.887 "num_base_bdevs": 3, 00:26:40.887 "num_base_bdevs_discovered": 3, 00:26:40.887 "num_base_bdevs_operational": 3, 00:26:40.887 "process": { 00:26:40.887 "type": "rebuild", 00:26:40.887 "target": "spare", 00:26:40.887 "progress": { 00:26:40.887 "blocks": 106496, 00:26:40.887 "percent": 83 00:26:40.887 } 00:26:40.887 }, 00:26:40.887 "base_bdevs_list": [ 00:26:40.887 { 00:26:40.887 "name": "spare", 00:26:40.887 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:40.887 "is_configured": true, 00:26:40.887 "data_offset": 2048, 00:26:40.887 "data_size": 63488 00:26:40.887 }, 00:26:40.887 { 00:26:40.887 "name": "BaseBdev2", 00:26:40.887 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:40.887 "is_configured": true, 00:26:40.887 "data_offset": 2048, 00:26:40.887 "data_size": 63488 00:26:40.887 }, 00:26:40.887 { 00:26:40.887 "name": "BaseBdev3", 00:26:40.887 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:40.887 "is_configured": true, 00:26:40.887 "data_offset": 2048, 00:26:40.887 "data_size": 63488 00:26:40.887 } 00:26:40.887 ] 00:26:40.887 }' 00:26:40.887 12:46:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:40.887 12:46:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.887 12:46:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:40.887 12:46:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.887 12:46:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:41.822 [2024-10-01 12:46:24.165626] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:41.822 [2024-10-01 12:46:24.165886] 
bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:41.822 [2024-10-01 12:46:24.166127] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:42.080 "name": "raid_bdev1", 00:26:42.080 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:42.080 "strip_size_kb": 64, 00:26:42.080 "state": "online", 00:26:42.080 "raid_level": "raid5f", 00:26:42.080 "superblock": true, 00:26:42.080 "num_base_bdevs": 3, 00:26:42.080 "num_base_bdevs_discovered": 3, 00:26:42.080 "num_base_bdevs_operational": 3, 00:26:42.080 "base_bdevs_list": [ 00:26:42.080 { 00:26:42.080 "name": "spare", 00:26:42.080 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:42.080 "is_configured": true, 00:26:42.080 "data_offset": 2048, 00:26:42.080 "data_size": 63488 00:26:42.080 }, 00:26:42.080 { 00:26:42.080 "name": "BaseBdev2", 00:26:42.080 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:42.080 "is_configured": true, 00:26:42.080 "data_offset": 2048, 00:26:42.080 "data_size": 63488 00:26:42.080 }, 00:26:42.080 { 00:26:42.080 "name": "BaseBdev3", 00:26:42.080 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:42.080 "is_configured": true, 00:26:42.080 "data_offset": 2048, 00:26:42.080 "data_size": 63488 00:26:42.080 } 00:26:42.080 ] 00:26:42.080 }' 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:42.080 12:46:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@660 -- # break 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:42.376 "name": "raid_bdev1", 00:26:42.376 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:42.376 "strip_size_kb": 64, 00:26:42.376 "state": "online", 00:26:42.376 "raid_level": "raid5f", 00:26:42.376 "superblock": true, 00:26:42.376 "num_base_bdevs": 3, 00:26:42.376 "num_base_bdevs_discovered": 3, 00:26:42.376 
"num_base_bdevs_operational": 3, 00:26:42.376 "base_bdevs_list": [ 00:26:42.376 { 00:26:42.376 "name": "spare", 00:26:42.376 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:42.376 "is_configured": true, 00:26:42.376 "data_offset": 2048, 00:26:42.376 "data_size": 63488 00:26:42.376 }, 00:26:42.376 { 00:26:42.376 "name": "BaseBdev2", 00:26:42.376 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:42.376 "is_configured": true, 00:26:42.376 "data_offset": 2048, 00:26:42.376 "data_size": 63488 00:26:42.376 }, 00:26:42.376 { 00:26:42.376 "name": "BaseBdev3", 00:26:42.376 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:42.376 "is_configured": true, 00:26:42.376 "data_offset": 2048, 00:26:42.376 "data_size": 63488 00:26:42.376 } 00:26:42.376 ] 00:26:42.376 }' 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:42.376 12:46:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.643 12:46:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.643 12:46:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:42.643 "name": "raid_bdev1", 00:26:42.643 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:42.643 "strip_size_kb": 64, 00:26:42.643 "state": "online", 00:26:42.643 "raid_level": "raid5f", 00:26:42.643 "superblock": true, 00:26:42.643 "num_base_bdevs": 3, 00:26:42.643 "num_base_bdevs_discovered": 3, 00:26:42.643 "num_base_bdevs_operational": 3, 00:26:42.643 "base_bdevs_list": [ 00:26:42.643 { 00:26:42.643 "name": "spare", 00:26:42.643 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:42.643 "is_configured": true, 00:26:42.643 "data_offset": 2048, 00:26:42.643 "data_size": 63488 00:26:42.643 }, 00:26:42.643 { 00:26:42.643 "name": "BaseBdev2", 00:26:42.643 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:42.643 "is_configured": true, 00:26:42.643 "data_offset": 2048, 00:26:42.643 "data_size": 63488 00:26:42.643 }, 00:26:42.643 { 00:26:42.643 "name": "BaseBdev3", 00:26:42.643 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:42.643 "is_configured": true, 00:26:42.643 "data_offset": 2048, 00:26:42.643 "data_size": 63488 00:26:42.643 } 00:26:42.643 ] 00:26:42.643 }' 00:26:42.643 12:46:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:42.643 12:46:25 -- common/autotest_common.sh@10 -- # set +x 00:26:43.210 12:46:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:43.469 [2024-10-01 12:46:25.818813] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:43.469 [2024-10-01 12:46:25.819018] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:43.469 [2024-10-01 12:46:25.819272] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:43.469 [2024-10-01 12:46:25.819454] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:43.469 [2024-10-01 12:46:25.819550] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:26:43.469 12:46:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:43.469 12:46:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.727 12:46:26 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:43.727 12:46:26 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:43.727 12:46:26 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@12 -- # local i 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:43.727 /dev/nbd0 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:43.727 12:46:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:43.727 12:46:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:43.727 12:46:26 -- common/autotest_common.sh@857 -- # local i 00:26:43.727 12:46:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:43.727 12:46:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:43.727 12:46:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:43.727 12:46:26 -- common/autotest_common.sh@861 -- # break 00:26:43.727 12:46:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:43.727 12:46:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:43.727 12:46:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:43.727 1+0 records in 00:26:43.727 1+0 records out 00:26:43.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603441 s, 6.8 MB/s 00:26:43.727 12:46:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.727 12:46:26 -- common/autotest_common.sh@874 -- # size=4096 00:26:43.727 12:46:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.727 12:46:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:43.727 12:46:26 -- common/autotest_common.sh@877 -- # return 0 00:26:43.728 12:46:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:43.728 12:46:26 -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:26:43.728 12:46:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:43.986 /dev/nbd1 00:26:43.986 12:46:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:43.986 12:46:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:43.986 12:46:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:43.986 12:46:26 -- common/autotest_common.sh@857 -- # local i 00:26:43.986 12:46:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:43.986 12:46:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:43.986 12:46:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:43.986 12:46:26 -- common/autotest_common.sh@861 -- # break 00:26:43.986 12:46:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:43.986 12:46:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:43.986 12:46:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:43.986 1+0 records in 00:26:43.986 1+0 records out 00:26:43.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302448 s, 13.5 MB/s 00:26:43.986 12:46:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.986 12:46:26 -- common/autotest_common.sh@874 -- # size=4096 00:26:43.986 12:46:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.986 12:46:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:43.986 12:46:26 -- common/autotest_common.sh@877 -- # return 0 00:26:43.986 12:46:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:43.986 12:46:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:43.986 12:46:26 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:44.245 12:46:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:44.245 12:46:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:44.245 12:46:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:44.245 12:46:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:44.245 12:46:26 -- bdev/nbd_common.sh@51 -- # local i 00:26:44.245 12:46:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:44.245 12:46:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@41 -- # break 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@45 -- # return 0 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:44.503 12:46:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:44.762 12:46:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:44.762 12:46:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:44.762 12:46:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:44.762 12:46:27 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:44.762 12:46:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:44.762 12:46:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:44.762 12:46:27 -- bdev/nbd_common.sh@41 -- # break 00:26:44.762 12:46:27 -- bdev/nbd_common.sh@45 -- # return 0 00:26:44.762 12:46:27 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:44.762 12:46:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:44.762 12:46:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:44.762 12:46:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:45.021 12:46:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:45.280 [2024-10-01 12:46:27.570361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:45.280 [2024-10-01 12:46:27.570488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.280 [2024-10-01 12:46:27.570526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:45.280 [2024-10-01 12:46:27.570559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.280 [2024-10-01 12:46:27.573401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.280 [2024-10-01 12:46:27.573504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:45.280 [2024-10-01 12:46:27.573635] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:45.280 [2024-10-01 12:46:27.573707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:45.280 BaseBdev1 00:26:45.280 12:46:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:45.280 12:46:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:45.280 12:46:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:45.280 12:46:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:45.538 [2024-10-01 12:46:27.929816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:45.538 [2024-10-01 12:46:27.929912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.538 [2024-10-01 12:46:27.929958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:45.538 [2024-10-01 12:46:27.929981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.538 [2024-10-01 12:46:27.930521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.538 [2024-10-01 12:46:27.930586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:45.538 [2024-10-01 12:46:27.930726] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:45.538 [2024-10-01 12:46:27.930747] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:45.538 [2024-10-01 12:46:27.930755] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:26:45.538 [2024-10-01 12:46:27.930781] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:26:45.538 [2024-10-01 12:46:27.930869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:45.538 BaseBdev2 00:26:45.538 12:46:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:45.538 12:46:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:45.538 12:46:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:45.797 12:46:28 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:45.797 [2024-10-01 12:46:28.297268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:45.797 [2024-10-01 12:46:28.297359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.797 [2024-10-01 12:46:28.297405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:45.797 [2024-10-01 12:46:28.297428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.797 [2024-10-01 12:46:28.297953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.797 [2024-10-01 12:46:28.298015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:45.797 [2024-10-01 12:46:28.298137] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:45.797 [2024-10-01 12:46:28.298158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:45.797 BaseBdev3 00:26:45.797 12:46:28 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:46.055 12:46:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:46.314 [2024-10-01 12:46:28.664750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:46.314 [2024-10-01 12:46:28.664840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:46.314 [2024-10-01 12:46:28.664879] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:46.314 [2024-10-01 12:46:28.664908] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.314 [2024-10-01 12:46:28.665441] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.315 [2024-10-01 12:46:28.665501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:46.315 [2024-10-01 12:46:28.665626] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:46.315 [2024-10-01 12:46:28.665647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:46.315 spare 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.315 12:46:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.315 [2024-10-01 12:46:28.765592] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:26:46.315 [2024-10-01 12:46:28.765617] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:46.315 [2024-10-01 12:46:28.765793] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:26:46.315 [2024-10-01 12:46:28.772377] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:26:46.315 [2024-10-01 12:46:28.772404] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:26:46.315 [2024-10-01 12:46:28.772605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.574 12:46:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:46.574 "name": "raid_bdev1", 00:26:46.574 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:46.574 "strip_size_kb": 64, 00:26:46.574 "state": "online", 00:26:46.574 "raid_level": "raid5f", 00:26:46.574 "superblock": true, 00:26:46.574 "num_base_bdevs": 3, 00:26:46.574 "num_base_bdevs_discovered": 3, 00:26:46.574 "num_base_bdevs_operational": 3, 00:26:46.574 "base_bdevs_list": [ 00:26:46.574 { 00:26:46.574 "name": "spare", 00:26:46.574 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:46.574 "is_configured": true, 00:26:46.574 "data_offset": 2048, 00:26:46.574 "data_size": 63488 00:26:46.574 }, 00:26:46.574 { 00:26:46.574 "name": "BaseBdev2", 00:26:46.574 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:46.574 "is_configured": true, 00:26:46.574 "data_offset": 2048, 00:26:46.574 "data_size": 63488 00:26:46.574 }, 00:26:46.574 { 00:26:46.574 "name": "BaseBdev3", 00:26:46.574 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:46.574 "is_configured": true, 00:26:46.574 "data_offset": 2048, 00:26:46.574 "data_size": 63488 00:26:46.574 } 00:26:46.574 ] 00:26:46.574 }' 00:26:46.574 12:46:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:46.574 12:46:28 -- common/autotest_common.sh@10 -- # set +x 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:47.140 "name": "raid_bdev1", 00:26:47.140 "uuid": "e1d74a3d-db1a-4bd4-8cfd-3a3e8c8680e4", 00:26:47.140 
"strip_size_kb": 64, 00:26:47.140 "state": "online", 00:26:47.140 "raid_level": "raid5f", 00:26:47.140 "superblock": true, 00:26:47.140 "num_base_bdevs": 3, 00:26:47.140 "num_base_bdevs_discovered": 3, 00:26:47.140 "num_base_bdevs_operational": 3, 00:26:47.140 "base_bdevs_list": [ 00:26:47.140 { 00:26:47.140 "name": "spare", 00:26:47.140 "uuid": "10674b14-917c-579c-a7f8-cb0ee52ac3b8", 00:26:47.140 "is_configured": true, 00:26:47.140 "data_offset": 2048, 00:26:47.140 "data_size": 63488 00:26:47.140 }, 00:26:47.140 { 00:26:47.140 "name": "BaseBdev2", 00:26:47.140 "uuid": "ca3504e0-8631-5bac-8290-80fa672c0cd7", 00:26:47.140 "is_configured": true, 00:26:47.140 "data_offset": 2048, 00:26:47.140 "data_size": 63488 00:26:47.140 }, 00:26:47.140 { 00:26:47.140 "name": "BaseBdev3", 00:26:47.140 "uuid": "aec61e33-9b69-5b49-8565-830774bbac38", 00:26:47.140 "is_configured": true, 00:26:47.140 "data_offset": 2048, 00:26:47.140 "data_size": 63488 00:26:47.140 } 00:26:47.140 ] 00:26:47.140 }' 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:47.140 12:46:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:47.397 12:46:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:47.397 12:46:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.397 12:46:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:47.397 12:46:29 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:47.397 12:46:29 -- bdev/bdev_raid.sh@709 -- # killprocess 129071 00:26:47.397 12:46:29 -- common/autotest_common.sh@926 -- # '[' -z 129071 ']' 00:26:47.397 12:46:29 -- common/autotest_common.sh@930 -- # kill -0 129071 00:26:47.397 12:46:29 -- common/autotest_common.sh@931 -- # uname 00:26:47.397 12:46:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:47.397 12:46:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129071 00:26:47.397 killing process with pid 129071 00:26:47.397 Received shutdown signal, test time was about 60.000000 seconds 00:26:47.397 00:26:47.397 Latency(us) 00:26:47.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.397 =================================================================================================================== 00:26:47.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:47.397 12:46:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:47.397 12:46:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:47.397 12:46:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129071' 00:26:47.397 12:46:29 -- common/autotest_common.sh@945 -- # kill 129071 00:26:47.397 12:46:29 -- common/autotest_common.sh@950 -- # wait 129071 00:26:47.397 [2024-10-01 12:46:29.909347] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:47.397 [2024-10-01 12:46:29.909440] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:47.397 [2024-10-01 12:46:29.909540] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:47.397 [2024-10-01 12:46:29.909556] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:26:47.966 [2024-10-01 12:46:30.322542] bdev_raid.c:1251:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:26:49.343 ************************************ 00:26:49.343 END TEST raid5f_rebuild_test_sb 00:26:49.343 ************************************ 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:49.343 00:26:49.343 real 0m23.574s 00:26:49.343 user 0m34.500s 00:26:49.343 sys 0m3.675s 00:26:49.343 12:46:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.343 12:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:26:49.343 12:46:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:26:49.343 12:46:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:49.343 12:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:49.343 ************************************ 00:26:49.343 START TEST raid5f_state_function_test 00:26:49.343 ************************************ 00:26:49.343 12:46:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=129700 00:26:49.343 Process raid pid: 129700 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129700' 00:26:49.343 12:46:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129700 
/var/tmp/spdk-raid.sock 00:26:49.343 12:46:31 -- common/autotest_common.sh@819 -- # '[' -z 129700 ']' 00:26:49.343 12:46:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:49.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:49.343 12:46:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:49.343 12:46:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:49.343 12:46:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.343 12:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:49.344 12:46:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:49.344 [2024-10-01 12:46:31.861248] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:49.344 [2024-10-01 12:46:31.861418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.601 [2024-10-01 12:46:32.030532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.861 [2024-10-01 12:46:32.212105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.861 [2024-10-01 12:46:32.392258] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:50.427 12:46:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:50.427 12:46:32 -- common/autotest_common.sh@852 -- # return 0 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:50.427 [2024-10-01 12:46:32.796689] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:50.427 [2024-10-01 12:46:32.796781] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:50.427 [2024-10-01 12:46:32.796792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:50.427 [2024-10-01 12:46:32.796814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:50.427 [2024-10-01 12:46:32.796820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:50.427 [2024-10-01 12:46:32.796858] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:50.427 [2024-10-01 12:46:32.796865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:50.427 [2024-10-01 12:46:32.796888] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.427 12:46:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.685 12:46:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:50.685 "name": "Existed_Raid", 00:26:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.685 "strip_size_kb": 64, 00:26:50.685 "state": "configuring", 00:26:50.685 "raid_level": "raid5f", 00:26:50.685 "superblock": false, 00:26:50.685 "num_base_bdevs": 4, 00:26:50.685 "num_base_bdevs_discovered": 0, 00:26:50.685 "num_base_bdevs_operational": 4, 00:26:50.685 "base_bdevs_list": [ 00:26:50.685 { 00:26:50.685 "name": "BaseBdev1", 00:26:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.685 "is_configured": false, 00:26:50.685 "data_offset": 0, 00:26:50.685 "data_size": 0 00:26:50.685 }, 00:26:50.685 { 00:26:50.685 "name": "BaseBdev2", 00:26:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.685 "is_configured": false, 00:26:50.685 "data_offset": 0, 00:26:50.685 "data_size": 0 00:26:50.685 }, 00:26:50.685 { 00:26:50.685 "name": "BaseBdev3", 00:26:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.685 "is_configured": false, 00:26:50.685 "data_offset": 0, 00:26:50.685 "data_size": 0 00:26:50.685 }, 00:26:50.685 { 00:26:50.685 "name": "BaseBdev4", 00:26:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.685 "is_configured": false, 00:26:50.685 "data_offset": 0, 00:26:50.685 "data_size": 0 00:26:50.685 } 00:26:50.685 ] 00:26:50.685 }' 00:26:50.685 12:46:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:50.685 12:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:51.252 12:46:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:51.252 [2024-10-01 12:46:33.687260] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:51.252 [2024-10-01 12:46:33.687303] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:26:51.252 12:46:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:51.513 [2024-10-01 12:46:33.871030] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:51.513 [2024-10-01 12:46:33.871107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:51.513 [2024-10-01 12:46:33.871115] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:51.513 [2024-10-01 12:46:33.871139] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:51.513 [2024-10-01 12:46:33.871146] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:51.513 [2024-10-01 12:46:33.871181] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:51.513 [2024-10-01 12:46:33.871187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:26:51.513 [2024-10-01 12:46:33.871210] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:51.513 12:46:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:51.771 [2024-10-01 12:46:34.084408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:51.771 BaseBdev1 00:26:51.771 12:46:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:51.771 12:46:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:26:51.771 12:46:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:51.771 12:46:34 -- common/autotest_common.sh@889 -- # local i 00:26:51.771 12:46:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:51.771 12:46:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:51.771 12:46:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:51.771 12:46:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:52.029 [ 00:26:52.029 { 00:26:52.029 "name": "BaseBdev1", 00:26:52.029 "aliases": [ 00:26:52.029 "3ceda460-3d76-45f3-92ec-81615edfacd7" 00:26:52.029 ], 00:26:52.029 "product_name": "Malloc disk", 00:26:52.029 "block_size": 512, 00:26:52.029 "num_blocks": 65536, 00:26:52.029 "uuid": "3ceda460-3d76-45f3-92ec-81615edfacd7", 00:26:52.029 "assigned_rate_limits": { 00:26:52.029 "rw_ios_per_sec": 0, 00:26:52.029 "rw_mbytes_per_sec": 0, 00:26:52.029 "r_mbytes_per_sec": 0, 00:26:52.029 "w_mbytes_per_sec": 0 00:26:52.029 }, 00:26:52.029 "claimed": true, 00:26:52.029 "claim_type": "exclusive_write", 00:26:52.029 "zoned": false, 00:26:52.029 "supported_io_types": { 00:26:52.029 "read": true, 00:26:52.029 "write": true, 00:26:52.029 "unmap": true, 00:26:52.029 "write_zeroes": true, 00:26:52.029 "flush": true, 00:26:52.029 "reset": true, 00:26:52.029 "compare": false, 00:26:52.029 "compare_and_write": false, 00:26:52.029 "abort": true, 00:26:52.029 "nvme_admin": false, 00:26:52.029 "nvme_io": false 00:26:52.029 }, 00:26:52.029 "memory_domains": [ 00:26:52.029 { 00:26:52.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.029 "dma_device_type": 2 00:26:52.029 } 00:26:52.029 ], 00:26:52.029 "driver_specific": {} 00:26:52.029 } 00:26:52.029 ] 00:26:52.029 12:46:34 -- common/autotest_common.sh@895 -- # return 0 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:52.029 12:46:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.030 12:46:34 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.288 12:46:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:52.288 "name": "Existed_Raid", 00:26:52.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.288 "strip_size_kb": 64, 00:26:52.288 "state": "configuring", 00:26:52.288 "raid_level": "raid5f", 00:26:52.288 "superblock": false, 00:26:52.288 "num_base_bdevs": 4, 00:26:52.288 "num_base_bdevs_discovered": 1, 00:26:52.288 "num_base_bdevs_operational": 4, 00:26:52.288 "base_bdevs_list": [ 00:26:52.288 { 00:26:52.288 "name": "BaseBdev1", 00:26:52.288 "uuid": "3ceda460-3d76-45f3-92ec-81615edfacd7", 00:26:52.288 "is_configured": true, 00:26:52.288 "data_offset": 0, 00:26:52.288 "data_size": 65536 00:26:52.288 }, 00:26:52.288 { 00:26:52.288 "name": "BaseBdev2", 00:26:52.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.288 "is_configured": false, 00:26:52.288 "data_offset": 0, 00:26:52.288 "data_size": 0 00:26:52.288 }, 00:26:52.288 { 00:26:52.288 "name": "BaseBdev3", 00:26:52.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.288 "is_configured": false, 00:26:52.288 "data_offset": 0, 00:26:52.288 "data_size": 0 00:26:52.288 }, 00:26:52.288 { 00:26:52.288 "name": "BaseBdev4", 00:26:52.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.288 "is_configured": false, 00:26:52.288 "data_offset": 0, 00:26:52.288 "data_size": 0 00:26:52.288 } 00:26:52.288 ] 00:26:52.288 }' 00:26:52.288 12:46:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:52.288 12:46:34 -- common/autotest_common.sh@10 -- # set +x 00:26:52.854 12:46:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:52.854 [2024-10-01 12:46:35.338759] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:52.854 [2024-10-01 12:46:35.338832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:26:52.854 12:46:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:26:52.854 12:46:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:53.112 [2024-10-01 12:46:35.514603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:53.112 [2024-10-01 12:46:35.516839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:53.112 [2024-10-01 12:46:35.516920] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:53.112 [2024-10-01 12:46:35.516930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:53.112 [2024-10-01 12:46:35.516953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:53.112 [2024-10-01 12:46:35.516959] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:53.112 [2024-10-01 12:46:35.516975] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.112 12:46:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.370 12:46:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:53.370 "name": "Existed_Raid", 00:26:53.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.370 "strip_size_kb": 64, 00:26:53.370 "state": "configuring", 00:26:53.370 "raid_level": "raid5f", 00:26:53.370 "superblock": false, 00:26:53.371 "num_base_bdevs": 4, 00:26:53.371 "num_base_bdevs_discovered": 1, 00:26:53.371 "num_base_bdevs_operational": 4, 00:26:53.371 "base_bdevs_list": [ 00:26:53.371 { 00:26:53.371 "name": "BaseBdev1", 00:26:53.371 "uuid": "3ceda460-3d76-45f3-92ec-81615edfacd7", 00:26:53.371 "is_configured": true, 00:26:53.371 "data_offset": 0, 00:26:53.371 "data_size": 65536 00:26:53.371 }, 00:26:53.371 { 00:26:53.371 "name": "BaseBdev2", 00:26:53.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.371 "is_configured": false, 00:26:53.371 "data_offset": 0, 00:26:53.371 "data_size": 0 00:26:53.371 }, 00:26:53.371 { 00:26:53.371 "name": "BaseBdev3", 00:26:53.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.371 "is_configured": false, 00:26:53.371 "data_offset": 0, 00:26:53.371 "data_size": 0 00:26:53.371 }, 00:26:53.371 { 00:26:53.371 "name": "BaseBdev4", 00:26:53.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.371 "is_configured": false, 00:26:53.371 "data_offset": 0, 00:26:53.371 "data_size": 0 00:26:53.371 } 00:26:53.371 ] 00:26:53.371 }' 00:26:53.371 12:46:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:53.371 12:46:35 -- common/autotest_common.sh@10 -- # set +x 00:26:53.938 12:46:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:53.938 [2024-10-01 12:46:36.459091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:53.938 BaseBdev2 00:26:54.197 12:46:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:54.197 12:46:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:26:54.197 12:46:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:54.197 12:46:36 -- common/autotest_common.sh@889 -- # local i 00:26:54.197 12:46:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:54.197 12:46:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:54.197 12:46:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:54.197 12:46:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:54.455 [ 00:26:54.455 { 00:26:54.455 "name": 
"BaseBdev2", 00:26:54.455 "aliases": [ 00:26:54.455 "8f0b077f-45bb-463d-a2a6-deaf7b8834a9" 00:26:54.455 ], 00:26:54.455 "product_name": "Malloc disk", 00:26:54.455 "block_size": 512, 00:26:54.455 "num_blocks": 65536, 00:26:54.455 "uuid": "8f0b077f-45bb-463d-a2a6-deaf7b8834a9", 00:26:54.455 "assigned_rate_limits": { 00:26:54.455 "rw_ios_per_sec": 0, 00:26:54.455 "rw_mbytes_per_sec": 0, 00:26:54.455 "r_mbytes_per_sec": 0, 00:26:54.455 "w_mbytes_per_sec": 0 00:26:54.455 }, 00:26:54.455 "claimed": true, 00:26:54.455 "claim_type": "exclusive_write", 00:26:54.455 "zoned": false, 00:26:54.455 "supported_io_types": { 00:26:54.455 "read": true, 00:26:54.455 "write": true, 00:26:54.455 "unmap": true, 00:26:54.455 "write_zeroes": true, 00:26:54.455 "flush": true, 00:26:54.455 "reset": true, 00:26:54.455 "compare": false, 00:26:54.455 "compare_and_write": false, 00:26:54.455 "abort": true, 00:26:54.455 "nvme_admin": false, 00:26:54.455 "nvme_io": false 00:26:54.455 }, 00:26:54.455 "memory_domains": [ 00:26:54.455 { 00:26:54.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.455 "dma_device_type": 2 00:26:54.455 } 00:26:54.455 ], 00:26:54.455 "driver_specific": {} 00:26:54.455 } 00:26:54.455 ] 00:26:54.455 12:46:36 -- common/autotest_common.sh@895 -- # return 0 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.455 12:46:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.714 12:46:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:54.714 "name": "Existed_Raid", 00:26:54.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.714 "strip_size_kb": 64, 00:26:54.714 "state": "configuring", 00:26:54.714 "raid_level": "raid5f", 00:26:54.714 "superblock": false, 00:26:54.714 "num_base_bdevs": 4, 00:26:54.714 "num_base_bdevs_discovered": 2, 00:26:54.714 "num_base_bdevs_operational": 4, 00:26:54.714 "base_bdevs_list": [ 00:26:54.714 { 00:26:54.714 "name": "BaseBdev1", 00:26:54.714 "uuid": "3ceda460-3d76-45f3-92ec-81615edfacd7", 00:26:54.714 "is_configured": true, 00:26:54.714 "data_offset": 0, 00:26:54.714 "data_size": 65536 00:26:54.714 }, 00:26:54.714 { 00:26:54.714 "name": "BaseBdev2", 00:26:54.714 "uuid": "8f0b077f-45bb-463d-a2a6-deaf7b8834a9", 00:26:54.714 "is_configured": true, 00:26:54.714 "data_offset": 0, 00:26:54.714 "data_size": 65536 00:26:54.714 }, 00:26:54.714 { 00:26:54.714 "name": "BaseBdev3", 00:26:54.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.714 "is_configured": false, 00:26:54.714 
"data_offset": 0, 00:26:54.714 "data_size": 0 00:26:54.714 }, 00:26:54.714 { 00:26:54.714 "name": "BaseBdev4", 00:26:54.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.714 "is_configured": false, 00:26:54.714 "data_offset": 0, 00:26:54.714 "data_size": 0 00:26:54.714 } 00:26:54.714 ] 00:26:54.714 }' 00:26:54.714 12:46:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:54.714 12:46:37 -- common/autotest_common.sh@10 -- # set +x 00:26:55.284 12:46:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:55.284 [2024-10-01 12:46:37.753644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:55.284 BaseBdev3 00:26:55.284 12:46:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:55.284 12:46:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:26:55.284 12:46:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:55.284 12:46:37 -- common/autotest_common.sh@889 -- # local i 00:26:55.284 12:46:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:55.284 12:46:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:55.284 12:46:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:55.543 12:46:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:55.803 [ 00:26:55.803 { 00:26:55.803 "name": "BaseBdev3", 00:26:55.803 "aliases": [ 00:26:55.803 "cfb8e6b1-cd68-463a-90f2-5eaf64266762" 00:26:55.803 ], 00:26:55.803 "product_name": "Malloc disk", 00:26:55.803 "block_size": 512, 00:26:55.803 "num_blocks": 65536, 00:26:55.803 "uuid": "cfb8e6b1-cd68-463a-90f2-5eaf64266762", 00:26:55.803 "assigned_rate_limits": { 00:26:55.803 "rw_ios_per_sec": 0, 00:26:55.803 "rw_mbytes_per_sec": 0, 00:26:55.803 "r_mbytes_per_sec": 0, 00:26:55.803 "w_mbytes_per_sec": 0 00:26:55.803 }, 00:26:55.803 "claimed": true, 00:26:55.803 "claim_type": "exclusive_write", 00:26:55.803 "zoned": false, 00:26:55.803 "supported_io_types": { 00:26:55.803 "read": true, 00:26:55.803 "write": true, 00:26:55.803 "unmap": true, 00:26:55.803 "write_zeroes": true, 00:26:55.803 "flush": true, 00:26:55.803 "reset": true, 00:26:55.803 "compare": false, 00:26:55.803 "compare_and_write": false, 00:26:55.803 "abort": true, 00:26:55.803 "nvme_admin": false, 00:26:55.803 "nvme_io": false 00:26:55.803 }, 00:26:55.803 "memory_domains": [ 00:26:55.803 { 00:26:55.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.803 "dma_device_type": 2 00:26:55.803 } 00:26:55.803 ], 00:26:55.803 "driver_specific": {} 00:26:55.803 } 00:26:55.803 ] 00:26:55.803 12:46:38 -- common/autotest_common.sh@895 -- # return 0 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:55.803 12:46:38 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.803 12:46:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.062 12:46:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:56.062 "name": "Existed_Raid", 00:26:56.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.062 "strip_size_kb": 64, 00:26:56.062 "state": "configuring", 00:26:56.062 "raid_level": "raid5f", 00:26:56.062 "superblock": false, 00:26:56.062 "num_base_bdevs": 4, 00:26:56.062 "num_base_bdevs_discovered": 3, 00:26:56.062 "num_base_bdevs_operational": 4, 00:26:56.062 "base_bdevs_list": [ 00:26:56.062 { 00:26:56.062 "name": "BaseBdev1", 00:26:56.062 "uuid": "3ceda460-3d76-45f3-92ec-81615edfacd7", 00:26:56.062 "is_configured": true, 00:26:56.062 "data_offset": 0, 00:26:56.062 "data_size": 65536 00:26:56.062 }, 00:26:56.062 { 00:26:56.062 "name": "BaseBdev2", 00:26:56.062 "uuid": "8f0b077f-45bb-463d-a2a6-deaf7b8834a9", 00:26:56.062 "is_configured": true, 00:26:56.062 "data_offset": 0, 00:26:56.062 "data_size": 65536 00:26:56.062 }, 00:26:56.062 { 00:26:56.062 "name": "BaseBdev3", 00:26:56.062 "uuid": "cfb8e6b1-cd68-463a-90f2-5eaf64266762", 00:26:56.062 "is_configured": true, 00:26:56.062 "data_offset": 0, 00:26:56.062 "data_size": 65536 00:26:56.062 }, 00:26:56.062 { 00:26:56.062 "name": "BaseBdev4", 00:26:56.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.062 "is_configured": false, 00:26:56.062 "data_offset": 0, 00:26:56.062 "data_size": 0 00:26:56.062 } 00:26:56.062 ] 00:26:56.062 }' 00:26:56.062 12:46:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:56.062 12:46:38 -- common/autotest_common.sh@10 -- # set +x 00:26:56.638 12:46:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:56.638 [2024-10-01 12:46:39.154858] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:56.638 [2024-10-01 12:46:39.154956] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:26:56.638 [2024-10-01 12:46:39.154965] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:56.638 [2024-10-01 12:46:39.155109] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:26:56.638 [2024-10-01 12:46:39.160491] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:26:56.638 [2024-10-01 12:46:39.160519] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:26:56.638 [2024-10-01 12:46:39.160803] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.638 BaseBdev4 00:26:56.900 12:46:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:26:56.900 12:46:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:26:56.900 12:46:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:56.900 12:46:39 -- common/autotest_common.sh@889 -- # local i 00:26:56.900 12:46:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:56.900 12:46:39 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:56.900 12:46:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:56.900 12:46:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:57.161 [ 00:26:57.161 { 00:26:57.161 "name": "BaseBdev4", 00:26:57.161 "aliases": [ 00:26:57.161 "227e706d-aabb-42d1-8077-5d8548c9e96e" 00:26:57.161 ], 00:26:57.161 "product_name": "Malloc disk", 00:26:57.161 "block_size": 512, 00:26:57.161 "num_blocks": 65536, 00:26:57.161 "uuid": "227e706d-aabb-42d1-8077-5d8548c9e96e", 00:26:57.161 "assigned_rate_limits": { 00:26:57.161 "rw_ios_per_sec": 0, 00:26:57.161 "rw_mbytes_per_sec": 0, 00:26:57.161 "r_mbytes_per_sec": 0, 00:26:57.161 "w_mbytes_per_sec": 0 00:26:57.161 }, 00:26:57.161 "claimed": true, 00:26:57.161 "claim_type": "exclusive_write", 00:26:57.161 "zoned": false, 00:26:57.161 "supported_io_types": { 00:26:57.161 "read": true, 00:26:57.161 "write": true, 00:26:57.161 "unmap": true, 00:26:57.161 "write_zeroes": true, 00:26:57.161 "flush": true, 00:26:57.161 "reset": true, 00:26:57.161 "compare": false, 00:26:57.161 "compare_and_write": false, 00:26:57.161 "abort": true, 00:26:57.161 "nvme_admin": false, 00:26:57.161 "nvme_io": false 00:26:57.161 }, 00:26:57.161 "memory_domains": [ 00:26:57.161 { 00:26:57.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.161 "dma_device_type": 2 00:26:57.161 } 00:26:57.161 ], 00:26:57.161 "driver_specific": {} 00:26:57.161 } 00:26:57.161 ] 00:26:57.161 12:46:39 -- common/autotest_common.sh@895 -- # return 0 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.161 12:46:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.420 12:46:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:57.420 "name": "Existed_Raid", 00:26:57.420 "uuid": "ce62443a-2740-493f-ad43-9670bec5ed86", 00:26:57.420 "strip_size_kb": 64, 00:26:57.420 "state": "online", 00:26:57.420 "raid_level": "raid5f", 00:26:57.420 "superblock": false, 00:26:57.420 "num_base_bdevs": 4, 00:26:57.420 "num_base_bdevs_discovered": 4, 00:26:57.420 "num_base_bdevs_operational": 4, 00:26:57.420 "base_bdevs_list": [ 00:26:57.420 { 00:26:57.420 "name": "BaseBdev1", 00:26:57.420 "uuid": "3ceda460-3d76-45f3-92ec-81615edfacd7", 00:26:57.420 "is_configured": true, 00:26:57.420 "data_offset": 0, 00:26:57.420 "data_size": 65536 
00:26:57.420 }, 00:26:57.420 { 00:26:57.420 "name": "BaseBdev2", 00:26:57.420 "uuid": "8f0b077f-45bb-463d-a2a6-deaf7b8834a9", 00:26:57.420 "is_configured": true, 00:26:57.420 "data_offset": 0, 00:26:57.420 "data_size": 65536 00:26:57.420 }, 00:26:57.420 { 00:26:57.420 "name": "BaseBdev3", 00:26:57.420 "uuid": "cfb8e6b1-cd68-463a-90f2-5eaf64266762", 00:26:57.420 "is_configured": true, 00:26:57.420 "data_offset": 0, 00:26:57.420 "data_size": 65536 00:26:57.420 }, 00:26:57.420 { 00:26:57.420 "name": "BaseBdev4", 00:26:57.420 "uuid": "227e706d-aabb-42d1-8077-5d8548c9e96e", 00:26:57.420 "is_configured": true, 00:26:57.420 "data_offset": 0, 00:26:57.420 "data_size": 65536 00:26:57.420 } 00:26:57.420 ] 00:26:57.420 }' 00:26:57.420 12:46:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:57.420 12:46:39 -- common/autotest_common.sh@10 -- # set +x 00:26:57.987 12:46:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:57.987 [2024-10-01 12:46:40.443378] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.247 12:46:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.505 12:46:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:58.505 "name": "Existed_Raid", 00:26:58.505 "uuid": "ce62443a-2740-493f-ad43-9670bec5ed86", 00:26:58.505 "strip_size_kb": 64, 00:26:58.505 "state": "online", 00:26:58.505 "raid_level": "raid5f", 00:26:58.505 "superblock": false, 00:26:58.505 "num_base_bdevs": 4, 00:26:58.505 "num_base_bdevs_discovered": 3, 00:26:58.505 "num_base_bdevs_operational": 3, 00:26:58.505 "base_bdevs_list": [ 00:26:58.505 { 00:26:58.505 "name": null, 00:26:58.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.505 "is_configured": false, 00:26:58.505 "data_offset": 0, 00:26:58.505 "data_size": 65536 00:26:58.505 }, 00:26:58.505 { 00:26:58.505 "name": "BaseBdev2", 00:26:58.505 "uuid": "8f0b077f-45bb-463d-a2a6-deaf7b8834a9", 00:26:58.505 "is_configured": true, 00:26:58.505 "data_offset": 0, 00:26:58.505 "data_size": 65536 00:26:58.505 }, 00:26:58.505 { 00:26:58.505 "name": "BaseBdev3", 00:26:58.505 "uuid": "cfb8e6b1-cd68-463a-90f2-5eaf64266762", 00:26:58.505 "is_configured": 
true, 00:26:58.505 "data_offset": 0, 00:26:58.505 "data_size": 65536 00:26:58.505 }, 00:26:58.505 { 00:26:58.505 "name": "BaseBdev4", 00:26:58.505 "uuid": "227e706d-aabb-42d1-8077-5d8548c9e96e", 00:26:58.505 "is_configured": true, 00:26:58.505 "data_offset": 0, 00:26:58.505 "data_size": 65536 00:26:58.505 } 00:26:58.505 ] 00:26:58.505 }' 00:26:58.506 12:46:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:58.506 12:46:40 -- common/autotest_common.sh@10 -- # set +x 00:26:59.073 12:46:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:59.073 12:46:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:59.073 12:46:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.073 12:46:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:59.073 12:46:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:59.073 12:46:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:59.073 12:46:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:59.333 [2024-10-01 12:46:41.639422] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:59.333 [2024-10-01 12:46:41.639458] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:59.333 [2024-10-01 12:46:41.639513] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:59.333 12:46:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:59.333 12:46:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:59.333 12:46:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.333 12:46:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:59.593 12:46:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:59.593 12:46:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:59.593 12:46:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:59.593 [2024-10-01 12:46:42.096161] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:59.852 12:46:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:59.852 12:46:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:59.852 12:46:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.852 12:46:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:00.111 12:46:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:00.111 12:46:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:00.111 12:46:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:00.111 [2024-10-01 12:46:42.552494] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:00.111 [2024-10-01 12:46:42.552581] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:27:00.370 12:46:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:00.370 12:46:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:00.370 12:46:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.370 12:46:42 -- 
bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:00.370 12:46:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:00.370 12:46:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:00.370 12:46:42 -- bdev/bdev_raid.sh@287 -- # killprocess 129700 00:27:00.370 12:46:42 -- common/autotest_common.sh@926 -- # '[' -z 129700 ']' 00:27:00.370 12:46:42 -- common/autotest_common.sh@930 -- # kill -0 129700 00:27:00.370 12:46:42 -- common/autotest_common.sh@931 -- # uname 00:27:00.370 12:46:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:00.370 12:46:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129700 00:27:00.370 12:46:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:00.370 12:46:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:00.370 killing process with pid 129700 00:27:00.370 12:46:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129700' 00:27:00.370 12:46:42 -- common/autotest_common.sh@945 -- # kill 129700 00:27:00.370 [2024-10-01 12:46:42.872450] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:00.370 12:46:42 -- common/autotest_common.sh@950 -- # wait 129700 00:27:00.370 [2024-10-01 12:46:42.872572] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:01.746 12:46:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:01.746 00:27:01.746 real 0m12.295s 00:27:01.746 user 0m20.867s 00:27:01.747 sys 0m2.012s 00:27:01.747 12:46:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.747 12:46:44 -- common/autotest_common.sh@10 -- # set +x 00:27:01.747 ************************************ 00:27:01.747 END TEST raid5f_state_function_test 00:27:01.747 ************************************ 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:27:01.747 12:46:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:27:01.747 12:46:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:01.747 12:46:44 -- common/autotest_common.sh@10 -- # set +x 00:27:01.747 ************************************ 00:27:01.747 START TEST raid5f_state_function_test_sb 00:27:01.747 ************************************ 00:27:01.747 12:46:44 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 
00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=130115 00:27:01.747 Process raid pid: 130115 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130115' 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:01.747 12:46:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130115 /var/tmp/spdk-raid.sock 00:27:01.747 12:46:44 -- common/autotest_common.sh@819 -- # '[' -z 130115 ']' 00:27:01.747 12:46:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:01.747 12:46:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:01.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:01.747 12:46:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:01.747 12:46:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:01.747 12:46:44 -- common/autotest_common.sh@10 -- # set +x 00:27:01.747 [2024-10-01 12:46:44.246706] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
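[annotation] The killprocess sequence that closed the previous test (the kill -0 probe, uname gate, ps comm= check, kill, wait) condenses to roughly the sketch below. It is reconstructed from the xtrace alone, not copied from common/autotest_common.sh, whose real helper carries extra branches (FreeBSD, forced kill):

    # illustrative sketch only; names and structure inferred from the trace above
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # no pid recorded for this test
        kill -0 "$pid" || return 1                   # process must still be running
        if [[ $(uname) == Linux ]]; then
            # guard against pid reuse: never signal a process now named "sudo"
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap and surface the exit code
    }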
00:27:01.747 [2024-10-01 12:46:44.246871] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.006 [2024-10-01 12:46:44.418327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.264 [2024-10-01 12:46:44.643817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.523 [2024-10-01 12:46:44.831658] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:02.782 12:46:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:02.782 12:46:45 -- common/autotest_common.sh@852 -- # return 0 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:02.782 [2024-10-01 12:46:45.230402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:02.782 [2024-10-01 12:46:45.230497] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:02.782 [2024-10-01 12:46:45.230508] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:02.782 [2024-10-01 12:46:45.230532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:02.782 [2024-10-01 12:46:45.230538] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:02.782 [2024-10-01 12:46:45.230576] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:02.782 [2024-10-01 12:46:45.230583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:02.782 [2024-10-01 12:46:45.230606] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.782 12:46:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.041 12:46:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:03.041 "name": "Existed_Raid", 00:27:03.041 "uuid": "77a18d34-2935-4ee9-8764-562dfebbb058", 00:27:03.041 "strip_size_kb": 64, 00:27:03.041 "state": "configuring", 00:27:03.041 "raid_level": "raid5f", 00:27:03.041 "superblock": true, 00:27:03.041 "num_base_bdevs": 4, 00:27:03.041 "num_base_bdevs_discovered": 0, 00:27:03.041 "num_base_bdevs_operational": 4, 00:27:03.041 "base_bdevs_list": [ 00:27:03.041 { 
00:27:03.041 "name": "BaseBdev1", 00:27:03.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.041 "is_configured": false, 00:27:03.041 "data_offset": 0, 00:27:03.041 "data_size": 0 00:27:03.041 }, 00:27:03.041 { 00:27:03.041 "name": "BaseBdev2", 00:27:03.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.041 "is_configured": false, 00:27:03.041 "data_offset": 0, 00:27:03.041 "data_size": 0 00:27:03.041 }, 00:27:03.041 { 00:27:03.041 "name": "BaseBdev3", 00:27:03.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.042 "is_configured": false, 00:27:03.042 "data_offset": 0, 00:27:03.042 "data_size": 0 00:27:03.042 }, 00:27:03.042 { 00:27:03.042 "name": "BaseBdev4", 00:27:03.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.042 "is_configured": false, 00:27:03.042 "data_offset": 0, 00:27:03.042 "data_size": 0 00:27:03.042 } 00:27:03.042 ] 00:27:03.042 }' 00:27:03.042 12:46:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:03.042 12:46:45 -- common/autotest_common.sh@10 -- # set +x 00:27:03.609 12:46:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:03.868 [2024-10-01 12:46:46.192862] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:03.868 [2024-10-01 12:46:46.192918] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:27:03.868 12:46:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:03.868 [2024-10-01 12:46:46.380672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:03.868 [2024-10-01 12:46:46.380763] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:03.868 [2024-10-01 12:46:46.380773] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:03.868 [2024-10-01 12:46:46.380799] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:03.868 [2024-10-01 12:46:46.380806] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:03.868 [2024-10-01 12:46:46.380843] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:03.868 [2024-10-01 12:46:46.380849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:03.868 [2024-10-01 12:46:46.380872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:03.868 12:46:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:04.127 [2024-10-01 12:46:46.606494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:04.127 BaseBdev1 00:27:04.127 12:46:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:04.127 12:46:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:27:04.127 12:46:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:04.127 12:46:46 -- common/autotest_common.sh@889 -- # local i 00:27:04.127 12:46:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:04.127 12:46:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:04.127 12:46:46 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:04.386 12:46:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:04.646 [ 00:27:04.646 { 00:27:04.646 "name": "BaseBdev1", 00:27:04.646 "aliases": [ 00:27:04.646 "a74ae8a3-f435-452a-802a-a26b69b5fc00" 00:27:04.646 ], 00:27:04.646 "product_name": "Malloc disk", 00:27:04.646 "block_size": 512, 00:27:04.646 "num_blocks": 65536, 00:27:04.646 "uuid": "a74ae8a3-f435-452a-802a-a26b69b5fc00", 00:27:04.646 "assigned_rate_limits": { 00:27:04.646 "rw_ios_per_sec": 0, 00:27:04.646 "rw_mbytes_per_sec": 0, 00:27:04.646 "r_mbytes_per_sec": 0, 00:27:04.646 "w_mbytes_per_sec": 0 00:27:04.646 }, 00:27:04.646 "claimed": true, 00:27:04.646 "claim_type": "exclusive_write", 00:27:04.646 "zoned": false, 00:27:04.646 "supported_io_types": { 00:27:04.646 "read": true, 00:27:04.646 "write": true, 00:27:04.646 "unmap": true, 00:27:04.646 "write_zeroes": true, 00:27:04.646 "flush": true, 00:27:04.646 "reset": true, 00:27:04.646 "compare": false, 00:27:04.646 "compare_and_write": false, 00:27:04.646 "abort": true, 00:27:04.646 "nvme_admin": false, 00:27:04.646 "nvme_io": false 00:27:04.646 }, 00:27:04.646 "memory_domains": [ 00:27:04.646 { 00:27:04.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.646 "dma_device_type": 2 00:27:04.646 } 00:27:04.646 ], 00:27:04.646 "driver_specific": {} 00:27:04.646 } 00:27:04.646 ] 00:27:04.646 12:46:46 -- common/autotest_common.sh@895 -- # return 0 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.646 12:46:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:04.905 12:46:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:04.905 "name": "Existed_Raid", 00:27:04.905 "uuid": "490ed865-3f49-4bdb-8baa-eaa03524a053", 00:27:04.905 "strip_size_kb": 64, 00:27:04.905 "state": "configuring", 00:27:04.905 "raid_level": "raid5f", 00:27:04.905 "superblock": true, 00:27:04.905 "num_base_bdevs": 4, 00:27:04.905 "num_base_bdevs_discovered": 1, 00:27:04.905 "num_base_bdevs_operational": 4, 00:27:04.905 "base_bdevs_list": [ 00:27:04.905 { 00:27:04.905 "name": "BaseBdev1", 00:27:04.905 "uuid": "a74ae8a3-f435-452a-802a-a26b69b5fc00", 00:27:04.905 "is_configured": true, 00:27:04.905 "data_offset": 2048, 00:27:04.905 "data_size": 63488 00:27:04.905 }, 00:27:04.905 { 00:27:04.905 "name": "BaseBdev2", 00:27:04.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.905 "is_configured": false, 00:27:04.905 "data_offset": 0, 00:27:04.905 "data_size": 0 
00:27:04.905 }, 00:27:04.905 { 00:27:04.905 "name": "BaseBdev3", 00:27:04.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.905 "is_configured": false, 00:27:04.905 "data_offset": 0, 00:27:04.905 "data_size": 0 00:27:04.905 }, 00:27:04.905 { 00:27:04.905 "name": "BaseBdev4", 00:27:04.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.905 "is_configured": false, 00:27:04.905 "data_offset": 0, 00:27:04.905 "data_size": 0 00:27:04.905 } 00:27:04.905 ] 00:27:04.905 }' 00:27:04.905 12:46:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:04.905 12:46:47 -- common/autotest_common.sh@10 -- # set +x 00:27:05.473 12:46:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:05.473 [2024-10-01 12:46:47.888700] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:05.473 [2024-10-01 12:46:47.888796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:27:05.473 12:46:47 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:27:05.473 12:46:47 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:05.731 12:46:48 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:05.990 BaseBdev1 00:27:05.990 12:46:48 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:27:05.990 12:46:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:27:05.990 12:46:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:05.990 12:46:48 -- common/autotest_common.sh@889 -- # local i 00:27:05.990 12:46:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:05.990 12:46:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:05.990 12:46:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:06.248 12:46:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:06.248 [ 00:27:06.248 { 00:27:06.248 "name": "BaseBdev1", 00:27:06.248 "aliases": [ 00:27:06.248 "b2c50842-4e11-4091-a15f-c0fdfdbf61fe" 00:27:06.248 ], 00:27:06.248 "product_name": "Malloc disk", 00:27:06.248 "block_size": 512, 00:27:06.248 "num_blocks": 65536, 00:27:06.248 "uuid": "b2c50842-4e11-4091-a15f-c0fdfdbf61fe", 00:27:06.248 "assigned_rate_limits": { 00:27:06.248 "rw_ios_per_sec": 0, 00:27:06.248 "rw_mbytes_per_sec": 0, 00:27:06.248 "r_mbytes_per_sec": 0, 00:27:06.248 "w_mbytes_per_sec": 0 00:27:06.248 }, 00:27:06.248 "claimed": false, 00:27:06.248 "zoned": false, 00:27:06.248 "supported_io_types": { 00:27:06.248 "read": true, 00:27:06.248 "write": true, 00:27:06.248 "unmap": true, 00:27:06.248 "write_zeroes": true, 00:27:06.248 "flush": true, 00:27:06.248 "reset": true, 00:27:06.248 "compare": false, 00:27:06.248 "compare_and_write": false, 00:27:06.248 "abort": true, 00:27:06.248 "nvme_admin": false, 00:27:06.248 "nvme_io": false 00:27:06.248 }, 00:27:06.248 "memory_domains": [ 00:27:06.248 { 00:27:06.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.248 "dma_device_type": 2 00:27:06.248 } 00:27:06.248 ], 00:27:06.248 "driver_specific": {} 00:27:06.248 } 00:27:06.248 ] 00:27:06.506 12:46:48 -- common/autotest_common.sh@895 -- # return 0 00:27:06.507 12:46:48 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:06.507 [2024-10-01 12:46:48.938550] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:06.507 [2024-10-01 12:46:48.940836] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:06.507 [2024-10-01 12:46:48.940937] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:06.507 [2024-10-01 12:46:48.940948] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:06.507 [2024-10-01 12:46:48.940972] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:06.507 [2024-10-01 12:46:48.940980] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:06.507 [2024-10-01 12:46:48.940997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.507 12:46:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:06.791 12:46:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:06.791 "name": "Existed_Raid", 00:27:06.791 "uuid": "5cac7600-0c01-4f99-9b5f-0acfc52b6b70", 00:27:06.791 "strip_size_kb": 64, 00:27:06.791 "state": "configuring", 00:27:06.791 "raid_level": "raid5f", 00:27:06.791 "superblock": true, 00:27:06.791 "num_base_bdevs": 4, 00:27:06.791 "num_base_bdevs_discovered": 1, 00:27:06.791 "num_base_bdevs_operational": 4, 00:27:06.791 "base_bdevs_list": [ 00:27:06.791 { 00:27:06.791 "name": "BaseBdev1", 00:27:06.791 "uuid": "b2c50842-4e11-4091-a15f-c0fdfdbf61fe", 00:27:06.791 "is_configured": true, 00:27:06.791 "data_offset": 2048, 00:27:06.791 "data_size": 63488 00:27:06.791 }, 00:27:06.791 { 00:27:06.791 "name": "BaseBdev2", 00:27:06.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.791 "is_configured": false, 00:27:06.791 "data_offset": 0, 00:27:06.791 "data_size": 0 00:27:06.791 }, 00:27:06.791 { 00:27:06.791 "name": "BaseBdev3", 00:27:06.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.791 "is_configured": false, 00:27:06.791 "data_offset": 0, 00:27:06.791 "data_size": 0 00:27:06.791 }, 00:27:06.791 { 00:27:06.791 "name": "BaseBdev4", 00:27:06.791 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:06.791 "is_configured": false, 00:27:06.791 "data_offset": 0, 00:27:06.791 "data_size": 0 00:27:06.791 } 00:27:06.791 ] 00:27:06.791 }' 00:27:06.791 12:46:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:06.791 12:46:49 -- common/autotest_common.sh@10 -- # set +x 00:27:07.357 12:46:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:07.616 [2024-10-01 12:46:49.897487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:07.616 BaseBdev2 00:27:07.616 12:46:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:07.616 12:46:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:27:07.616 12:46:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:07.616 12:46:49 -- common/autotest_common.sh@889 -- # local i 00:27:07.616 12:46:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:07.616 12:46:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:07.616 12:46:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:07.616 12:46:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:07.875 [ 00:27:07.875 { 00:27:07.875 "name": "BaseBdev2", 00:27:07.875 "aliases": [ 00:27:07.875 "9d537d2b-0066-411f-9c3d-e4b190f33da2" 00:27:07.875 ], 00:27:07.875 "product_name": "Malloc disk", 00:27:07.875 "block_size": 512, 00:27:07.875 "num_blocks": 65536, 00:27:07.875 "uuid": "9d537d2b-0066-411f-9c3d-e4b190f33da2", 00:27:07.875 "assigned_rate_limits": { 00:27:07.875 "rw_ios_per_sec": 0, 00:27:07.875 "rw_mbytes_per_sec": 0, 00:27:07.875 "r_mbytes_per_sec": 0, 00:27:07.875 "w_mbytes_per_sec": 0 00:27:07.875 }, 00:27:07.875 "claimed": true, 00:27:07.875 "claim_type": "exclusive_write", 00:27:07.875 "zoned": false, 00:27:07.875 "supported_io_types": { 00:27:07.875 "read": true, 00:27:07.875 "write": true, 00:27:07.875 "unmap": true, 00:27:07.875 "write_zeroes": true, 00:27:07.875 "flush": true, 00:27:07.875 "reset": true, 00:27:07.875 "compare": false, 00:27:07.875 "compare_and_write": false, 00:27:07.875 "abort": true, 00:27:07.875 "nvme_admin": false, 00:27:07.875 "nvme_io": false 00:27:07.875 }, 00:27:07.875 "memory_domains": [ 00:27:07.875 { 00:27:07.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.875 "dma_device_type": 2 00:27:07.875 } 00:27:07.875 ], 00:27:07.875 "driver_specific": {} 00:27:07.875 } 00:27:07.875 ] 00:27:07.875 12:46:50 -- common/autotest_common.sh@895 -- # return 0 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.875 12:46:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:08.135 12:46:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:08.135 "name": "Existed_Raid", 00:27:08.135 "uuid": "5cac7600-0c01-4f99-9b5f-0acfc52b6b70", 00:27:08.135 "strip_size_kb": 64, 00:27:08.135 "state": "configuring", 00:27:08.135 "raid_level": "raid5f", 00:27:08.135 "superblock": true, 00:27:08.135 "num_base_bdevs": 4, 00:27:08.135 "num_base_bdevs_discovered": 2, 00:27:08.135 "num_base_bdevs_operational": 4, 00:27:08.135 "base_bdevs_list": [ 00:27:08.135 { 00:27:08.135 "name": "BaseBdev1", 00:27:08.135 "uuid": "b2c50842-4e11-4091-a15f-c0fdfdbf61fe", 00:27:08.135 "is_configured": true, 00:27:08.135 "data_offset": 2048, 00:27:08.135 "data_size": 63488 00:27:08.135 }, 00:27:08.135 { 00:27:08.135 "name": "BaseBdev2", 00:27:08.135 "uuid": "9d537d2b-0066-411f-9c3d-e4b190f33da2", 00:27:08.135 "is_configured": true, 00:27:08.135 "data_offset": 2048, 00:27:08.135 "data_size": 63488 00:27:08.135 }, 00:27:08.135 { 00:27:08.135 "name": "BaseBdev3", 00:27:08.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.135 "is_configured": false, 00:27:08.135 "data_offset": 0, 00:27:08.135 "data_size": 0 00:27:08.135 }, 00:27:08.135 { 00:27:08.135 "name": "BaseBdev4", 00:27:08.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.135 "is_configured": false, 00:27:08.135 "data_offset": 0, 00:27:08.135 "data_size": 0 00:27:08.135 } 00:27:08.135 ] 00:27:08.135 }' 00:27:08.135 12:46:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:08.135 12:46:50 -- common/autotest_common.sh@10 -- # set +x 00:27:08.704 12:46:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:08.964 [2024-10-01 12:46:51.248579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:08.964 BaseBdev3 00:27:08.964 12:46:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:08.964 12:46:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:27:08.964 12:46:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:08.964 12:46:51 -- common/autotest_common.sh@889 -- # local i 00:27:08.964 12:46:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:08.964 12:46:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:08.964 12:46:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:08.964 12:46:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:09.225 [ 00:27:09.225 { 00:27:09.225 "name": "BaseBdev3", 00:27:09.225 "aliases": [ 00:27:09.225 "63b6c626-1e2c-49e0-b2b8-4e591f5e1c6f" 00:27:09.225 ], 00:27:09.225 "product_name": "Malloc disk", 00:27:09.225 "block_size": 512, 00:27:09.225 "num_blocks": 65536, 00:27:09.225 "uuid": "63b6c626-1e2c-49e0-b2b8-4e591f5e1c6f", 00:27:09.225 "assigned_rate_limits": { 00:27:09.225 "rw_ios_per_sec": 0, 00:27:09.225 "rw_mbytes_per_sec": 0, 00:27:09.225 "r_mbytes_per_sec": 0, 00:27:09.225 "w_mbytes_per_sec": 0 00:27:09.225 }, 00:27:09.225 "claimed": true, 00:27:09.225 "claim_type": "exclusive_write", 
00:27:09.225 "zoned": false, 00:27:09.225 "supported_io_types": { 00:27:09.225 "read": true, 00:27:09.225 "write": true, 00:27:09.225 "unmap": true, 00:27:09.225 "write_zeroes": true, 00:27:09.225 "flush": true, 00:27:09.225 "reset": true, 00:27:09.225 "compare": false, 00:27:09.225 "compare_and_write": false, 00:27:09.225 "abort": true, 00:27:09.225 "nvme_admin": false, 00:27:09.225 "nvme_io": false 00:27:09.225 }, 00:27:09.225 "memory_domains": [ 00:27:09.225 { 00:27:09.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.225 "dma_device_type": 2 00:27:09.225 } 00:27:09.225 ], 00:27:09.225 "driver_specific": {} 00:27:09.225 } 00:27:09.225 ] 00:27:09.225 12:46:51 -- common/autotest_common.sh@895 -- # return 0 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.225 12:46:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:09.485 12:46:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:09.485 "name": "Existed_Raid", 00:27:09.485 "uuid": "5cac7600-0c01-4f99-9b5f-0acfc52b6b70", 00:27:09.485 "strip_size_kb": 64, 00:27:09.485 "state": "configuring", 00:27:09.485 "raid_level": "raid5f", 00:27:09.485 "superblock": true, 00:27:09.485 "num_base_bdevs": 4, 00:27:09.485 "num_base_bdevs_discovered": 3, 00:27:09.485 "num_base_bdevs_operational": 4, 00:27:09.485 "base_bdevs_list": [ 00:27:09.485 { 00:27:09.485 "name": "BaseBdev1", 00:27:09.485 "uuid": "b2c50842-4e11-4091-a15f-c0fdfdbf61fe", 00:27:09.485 "is_configured": true, 00:27:09.485 "data_offset": 2048, 00:27:09.485 "data_size": 63488 00:27:09.485 }, 00:27:09.485 { 00:27:09.485 "name": "BaseBdev2", 00:27:09.485 "uuid": "9d537d2b-0066-411f-9c3d-e4b190f33da2", 00:27:09.485 "is_configured": true, 00:27:09.485 "data_offset": 2048, 00:27:09.485 "data_size": 63488 00:27:09.485 }, 00:27:09.485 { 00:27:09.485 "name": "BaseBdev3", 00:27:09.485 "uuid": "63b6c626-1e2c-49e0-b2b8-4e591f5e1c6f", 00:27:09.485 "is_configured": true, 00:27:09.485 "data_offset": 2048, 00:27:09.485 "data_size": 63488 00:27:09.485 }, 00:27:09.485 { 00:27:09.485 "name": "BaseBdev4", 00:27:09.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.485 "is_configured": false, 00:27:09.485 "data_offset": 0, 00:27:09.485 "data_size": 0 00:27:09.485 } 00:27:09.485 ] 00:27:09.485 }' 00:27:09.485 12:46:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:09.485 12:46:51 -- common/autotest_common.sh@10 -- # set +x 00:27:10.055 12:46:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:10.055 [2024-10-01 12:46:52.587525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:10.055 [2024-10-01 12:46:52.587796] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:27:10.055 [2024-10-01 12:46:52.587809] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:10.055 [2024-10-01 12:46:52.587946] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:27:10.315 BaseBdev4 00:27:10.315 [2024-10-01 12:46:52.593594] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:27:10.315 [2024-10-01 12:46:52.593623] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:27:10.315 [2024-10-01 12:46:52.593812] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.315 12:46:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:27:10.315 12:46:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:27:10.315 12:46:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:10.315 12:46:52 -- common/autotest_common.sh@889 -- # local i 00:27:10.315 12:46:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:10.315 12:46:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:10.315 12:46:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:10.315 12:46:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:10.575 [ 00:27:10.575 { 00:27:10.575 "name": "BaseBdev4", 00:27:10.575 "aliases": [ 00:27:10.575 "0abfdb5f-7c43-4ae6-b3dd-c084b22053cd" 00:27:10.575 ], 00:27:10.575 "product_name": "Malloc disk", 00:27:10.575 "block_size": 512, 00:27:10.575 "num_blocks": 65536, 00:27:10.575 "uuid": "0abfdb5f-7c43-4ae6-b3dd-c084b22053cd", 00:27:10.575 "assigned_rate_limits": { 00:27:10.575 "rw_ios_per_sec": 0, 00:27:10.575 "rw_mbytes_per_sec": 0, 00:27:10.575 "r_mbytes_per_sec": 0, 00:27:10.575 "w_mbytes_per_sec": 0 00:27:10.575 }, 00:27:10.575 "claimed": true, 00:27:10.575 "claim_type": "exclusive_write", 00:27:10.575 "zoned": false, 00:27:10.575 "supported_io_types": { 00:27:10.575 "read": true, 00:27:10.575 "write": true, 00:27:10.575 "unmap": true, 00:27:10.575 "write_zeroes": true, 00:27:10.575 "flush": true, 00:27:10.575 "reset": true, 00:27:10.575 "compare": false, 00:27:10.575 "compare_and_write": false, 00:27:10.575 "abort": true, 00:27:10.575 "nvme_admin": false, 00:27:10.575 "nvme_io": false 00:27:10.575 }, 00:27:10.575 "memory_domains": [ 00:27:10.575 { 00:27:10.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.575 "dma_device_type": 2 00:27:10.575 } 00:27:10.575 ], 00:27:10.575 "driver_specific": {} 00:27:10.575 } 00:27:10.575 ] 00:27:10.575 12:46:52 -- common/autotest_common.sh@895 -- # return 0 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.575 12:46:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:10.835 12:46:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:10.835 "name": "Existed_Raid", 00:27:10.835 "uuid": "5cac7600-0c01-4f99-9b5f-0acfc52b6b70", 00:27:10.835 "strip_size_kb": 64, 00:27:10.835 "state": "online", 00:27:10.835 "raid_level": "raid5f", 00:27:10.835 "superblock": true, 00:27:10.835 "num_base_bdevs": 4, 00:27:10.835 "num_base_bdevs_discovered": 4, 00:27:10.835 "num_base_bdevs_operational": 4, 00:27:10.835 "base_bdevs_list": [ 00:27:10.835 { 00:27:10.835 "name": "BaseBdev1", 00:27:10.835 "uuid": "b2c50842-4e11-4091-a15f-c0fdfdbf61fe", 00:27:10.835 "is_configured": true, 00:27:10.835 "data_offset": 2048, 00:27:10.835 "data_size": 63488 00:27:10.835 }, 00:27:10.835 { 00:27:10.835 "name": "BaseBdev2", 00:27:10.835 "uuid": "9d537d2b-0066-411f-9c3d-e4b190f33da2", 00:27:10.835 "is_configured": true, 00:27:10.835 "data_offset": 2048, 00:27:10.835 "data_size": 63488 00:27:10.835 }, 00:27:10.835 { 00:27:10.835 "name": "BaseBdev3", 00:27:10.835 "uuid": "63b6c626-1e2c-49e0-b2b8-4e591f5e1c6f", 00:27:10.835 "is_configured": true, 00:27:10.835 "data_offset": 2048, 00:27:10.835 "data_size": 63488 00:27:10.835 }, 00:27:10.835 { 00:27:10.835 "name": "BaseBdev4", 00:27:10.835 "uuid": "0abfdb5f-7c43-4ae6-b3dd-c084b22053cd", 00:27:10.835 "is_configured": true, 00:27:10.835 "data_offset": 2048, 00:27:10.835 "data_size": 63488 00:27:10.835 } 00:27:10.835 ] 00:27:10.835 }' 00:27:10.835 12:46:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:10.835 12:46:53 -- common/autotest_common.sh@10 -- # set +x 00:27:11.405 12:46:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:11.405 [2024-10-01 12:46:53.881445] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:11.666 12:46:53 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.666 12:46:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.666 12:46:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:11.666 "name": "Existed_Raid", 00:27:11.666 "uuid": "5cac7600-0c01-4f99-9b5f-0acfc52b6b70", 00:27:11.666 "strip_size_kb": 64, 00:27:11.666 "state": "online", 00:27:11.666 "raid_level": "raid5f", 00:27:11.666 "superblock": true, 00:27:11.666 "num_base_bdevs": 4, 00:27:11.666 "num_base_bdevs_discovered": 3, 00:27:11.666 "num_base_bdevs_operational": 3, 00:27:11.666 "base_bdevs_list": [ 00:27:11.666 { 00:27:11.666 "name": null, 00:27:11.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.666 "is_configured": false, 00:27:11.666 "data_offset": 2048, 00:27:11.666 "data_size": 63488 00:27:11.666 }, 00:27:11.666 { 00:27:11.666 "name": "BaseBdev2", 00:27:11.666 "uuid": "9d537d2b-0066-411f-9c3d-e4b190f33da2", 00:27:11.666 "is_configured": true, 00:27:11.666 "data_offset": 2048, 00:27:11.666 "data_size": 63488 00:27:11.666 }, 00:27:11.666 { 00:27:11.666 "name": "BaseBdev3", 00:27:11.666 "uuid": "63b6c626-1e2c-49e0-b2b8-4e591f5e1c6f", 00:27:11.666 "is_configured": true, 00:27:11.666 "data_offset": 2048, 00:27:11.666 "data_size": 63488 00:27:11.666 }, 00:27:11.666 { 00:27:11.666 "name": "BaseBdev4", 00:27:11.666 "uuid": "0abfdb5f-7c43-4ae6-b3dd-c084b22053cd", 00:27:11.666 "is_configured": true, 00:27:11.666 "data_offset": 2048, 00:27:11.666 "data_size": 63488 00:27:11.666 } 00:27:11.666 ] 00:27:11.666 }' 00:27:11.666 12:46:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:11.666 12:46:54 -- common/autotest_common.sh@10 -- # set +x 00:27:12.236 12:46:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:27:12.236 12:46:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:12.236 12:46:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.236 12:46:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:12.496 12:46:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:12.496 12:46:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:12.496 12:46:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:12.496 [2024-10-01 12:46:55.024462] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:12.496 [2024-10-01 12:46:55.024505] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:12.496 [2024-10-01 12:46:55.024568] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.755 12:46:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:12.755 12:46:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:12.755 12:46:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.755 12:46:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:13.015 12:46:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:13.015 12:46:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:13.015 12:46:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:13.015 [2024-10-01 12:46:55.489597] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:13.275 12:46:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:13.275 12:46:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:13.275 12:46:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.275 12:46:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:13.275 12:46:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:13.275 12:46:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:13.275 12:46:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:13.534 [2024-10-01 12:46:55.955066] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:13.534 [2024-10-01 12:46:55.955148] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:27:13.534 12:46:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:13.534 12:46:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:13.794 12:46:56 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.794 12:46:56 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:13.794 12:46:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:13.794 12:46:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:13.794 12:46:56 -- bdev/bdev_raid.sh@287 -- # killprocess 130115 00:27:13.794 12:46:56 -- common/autotest_common.sh@926 -- # '[' -z 130115 ']' 00:27:13.794 12:46:56 -- common/autotest_common.sh@930 -- # kill -0 130115 00:27:13.794 12:46:56 -- common/autotest_common.sh@931 -- # uname 00:27:13.794 12:46:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:13.794 12:46:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130115 00:27:13.794 12:46:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:13.794 killing process with pid 130115 00:27:13.794 12:46:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:13.794 12:46:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130115' 00:27:13.794 12:46:56 -- common/autotest_common.sh@945 -- # kill 130115 00:27:13.794 [2024-10-01 12:46:56.292714] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:13.794 12:46:56 -- common/autotest_common.sh@950 -- # wait 130115 00:27:13.794 [2024-10-01 12:46:56.292848] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:15.177 00:27:15.177 real 0m13.348s 00:27:15.177 user 0m22.492s 00:27:15.177 sys 0m2.351s 00:27:15.177 ************************************ 00:27:15.177 END TEST raid5f_state_function_test_sb 00:27:15.177 ************************************ 00:27:15.177 12:46:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.177 12:46:57 -- common/autotest_common.sh@10 -- # set +x 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:27:15.177 12:46:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:15.177 12:46:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:15.177 12:46:57 -- common/autotest_common.sh@10 -- # set +x 00:27:15.177 
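[annotation] Each BaseBdev in the test above was gated by a waitforbdev step: bdev_wait_for_examine to flush pending examine callbacks, then a bdev_get_bdevs lookup with a 2000 ms timeout. A sketch reconstructed from that xtrace (both RPC calls and flags appear verbatim in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforbdev_sketch() {
        local bdev_name=$1 bdev_timeout=$2
        [[ -z $bdev_timeout ]] && bdev_timeout=2000  # default seen in the trace
        # let all pending examine callbacks finish ...
        "$rpc" -s /var/tmp/spdk-raid.sock bdev_wait_for_examine || return 1
        # ... then block until the bdev shows up (or the timeout expires)
        "$rpc" -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$bdev_name" \
            -t "$bdev_timeout" > /dev/null
    }
    # e.g. waitforbdev_sketch BaseBdev1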
************************************ 00:27:15.177 START TEST raid5f_superblock_test 00:27:15.177 ************************************ 00:27:15.177 12:46:57 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@357 -- # raid_pid=130545 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130545 /var/tmp/spdk-raid.sock 00:27:15.177 12:46:57 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:15.177 12:46:57 -- common/autotest_common.sh@819 -- # '[' -z 130545 ']' 00:27:15.177 12:46:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:15.177 12:46:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:15.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:15.177 12:46:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:15.177 12:46:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:15.177 12:46:57 -- common/autotest_common.sh@10 -- # set +x 00:27:15.177 [2024-10-01 12:46:57.683823] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
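[annotation] The verify_raid_bdev_state checks that punctuate both tests (and recur below for raid_bdev1) reduce to pulling the raid bdev's JSON from bdev_raid_get_bdevs and comparing a handful of fields. A condensed sketch from the xtrace; the in-tree helper also walks base_bdevs_list and the discovered counts:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    verify_state_sketch() {
        local name=$1 state=$2 level=$3 strip=$4 operational=$5
        local info
        info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$name\")")
        [[ -n $info ]] || return 1                   # raid bdev must exist at all
        [[ $(jq -r '.state' <<< "$info") == "$state" ]] || return 1
        [[ $(jq -r '.raid_level' <<< "$info") == "$level" ]] || return 1
        [[ $(jq -r '.strip_size_kb' <<< "$info") -eq $strip ]] || return 1
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq $operational ]]
    }
    # e.g. verify_state_sketch raid_bdev1 online raid5f 64 4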
00:27:15.177 [2024-10-01 12:46:57.684001] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130545 ] 00:27:15.437 [2024-10-01 12:46:57.841554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.697 [2024-10-01 12:46:58.051965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.957 [2024-10-01 12:46:58.236918] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:16.217 12:46:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:16.217 12:46:58 -- common/autotest_common.sh@852 -- # return 0 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:16.217 malloc1 00:27:16.217 12:46:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:16.477 [2024-10-01 12:46:58.867624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:16.477 [2024-10-01 12:46:58.867745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.477 [2024-10-01 12:46:58.867779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:16.477 [2024-10-01 12:46:58.867828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.477 [2024-10-01 12:46:58.870367] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.477 [2024-10-01 12:46:58.870419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:16.477 pt1 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:16.477 12:46:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:16.738 malloc2 00:27:16.738 12:46:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:27:16.996 [2024-10-01 12:46:59.298575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:16.996 [2024-10-01 12:46:59.298681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.996 [2024-10-01 12:46:59.298730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:16.996 [2024-10-01 12:46:59.298805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.996 [2024-10-01 12:46:59.301383] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.996 [2024-10-01 12:46:59.301436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:16.996 pt2 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:16.996 12:46:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:17.256 malloc3 00:27:17.256 12:46:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:17.256 [2024-10-01 12:46:59.735815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:17.256 [2024-10-01 12:46:59.735945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.256 [2024-10-01 12:46:59.735995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:17.256 [2024-10-01 12:46:59.736045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.256 [2024-10-01 12:46:59.738587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.256 [2024-10-01 12:46:59.738644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:17.256 pt3 00:27:17.256 12:46:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:17.256 12:46:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:17.257 12:46:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:27:17.257 12:46:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:27:17.257 12:46:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:17.257 12:46:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:17.257 12:46:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:27:17.257 12:46:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:17.257 12:46:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:27:17.516 malloc4 00:27:17.516 12:46:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:27:17.776 [2024-10-01 12:47:00.160728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:17.776 [2024-10-01 12:47:00.160847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.776 [2024-10-01 12:47:00.160884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:17.776 [2024-10-01 12:47:00.160929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.776 [2024-10-01 12:47:00.163494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.776 [2024-10-01 12:47:00.163555] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:17.776 pt4 00:27:17.776 12:47:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:27:17.776 12:47:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:27:17.776 12:47:00 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:27:18.036 [2024-10-01 12:47:00.340544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:18.036 [2024-10-01 12:47:00.342731] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:18.036 [2024-10-01 12:47:00.342832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:18.036 [2024-10-01 12:47:00.342903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:18.036 [2024-10-01 12:47:00.343112] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:18.036 [2024-10-01 12:47:00.343122] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:18.036 [2024-10-01 12:47:00.343256] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:27:18.036 [2024-10-01 12:47:00.348684] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:18.036 [2024-10-01 12:47:00.348709] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:27:18.036 [2024-10-01 12:47:00.348928] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.036 12:47:00 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:18.036 12:47:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:18.036 12:47:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:18.036 12:47:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:18.036 12:47:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:18.036 12:47:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:18.036 12:47:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:18.037 12:47:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:18.037 12:47:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:18.037 12:47:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:18.037 12:47:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.037 12:47:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.037 12:47:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:18.037 "name": "raid_bdev1", 00:27:18.037 "uuid": 
"71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:18.037 "strip_size_kb": 64, 00:27:18.037 "state": "online", 00:27:18.037 "raid_level": "raid5f", 00:27:18.037 "superblock": true, 00:27:18.037 "num_base_bdevs": 4, 00:27:18.037 "num_base_bdevs_discovered": 4, 00:27:18.037 "num_base_bdevs_operational": 4, 00:27:18.037 "base_bdevs_list": [ 00:27:18.037 { 00:27:18.037 "name": "pt1", 00:27:18.037 "uuid": "63dc643c-287b-57bd-a06f-6830cf1130c5", 00:27:18.037 "is_configured": true, 00:27:18.037 "data_offset": 2048, 00:27:18.037 "data_size": 63488 00:27:18.037 }, 00:27:18.037 { 00:27:18.037 "name": "pt2", 00:27:18.037 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:18.037 "is_configured": true, 00:27:18.037 "data_offset": 2048, 00:27:18.037 "data_size": 63488 00:27:18.037 }, 00:27:18.037 { 00:27:18.037 "name": "pt3", 00:27:18.037 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:18.037 "is_configured": true, 00:27:18.037 "data_offset": 2048, 00:27:18.037 "data_size": 63488 00:27:18.037 }, 00:27:18.037 { 00:27:18.037 "name": "pt4", 00:27:18.037 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:18.037 "is_configured": true, 00:27:18.037 "data_offset": 2048, 00:27:18.037 "data_size": 63488 00:27:18.037 } 00:27:18.037 ] 00:27:18.037 }' 00:27:18.037 12:47:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:18.037 12:47:00 -- common/autotest_common.sh@10 -- # set +x 00:27:18.606 12:47:01 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:18.606 12:47:01 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:27:18.865 [2024-10-01 12:47:01.300778] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:18.865 12:47:01 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=71a48193-6027-4b86-b2e0-2fb795a8e0f4 00:27:18.865 12:47:01 -- bdev/bdev_raid.sh@380 -- # '[' -z 71a48193-6027-4b86-b2e0-2fb795a8e0f4 ']' 00:27:18.865 12:47:01 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:19.127 [2024-10-01 12:47:01.496328] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:19.127 [2024-10-01 12:47:01.496377] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:19.127 [2024-10-01 12:47:01.496477] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.127 [2024-10-01 12:47:01.496595] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.127 [2024-10-01 12:47:01.496606] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:27:19.127 12:47:01 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.127 12:47:01 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:27:19.388 12:47:01 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:27:19.388 12:47:01 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:27:19.388 12:47:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:19.388 12:47:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:19.388 12:47:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:19.388 12:47:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:27:19.648 12:47:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:19.648 12:47:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:19.907 12:47:02 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:27:19.907 12:47:02 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:20.165 12:47:02 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:20.165 12:47:02 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:20.165 12:47:02 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:27:20.165 12:47:02 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:20.165 12:47:02 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.165 12:47:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:20.165 12:47:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:20.165 12:47:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.165 12:47:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:20.165 12:47:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.165 12:47:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:20.165 12:47:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.165 12:47:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:20.165 12:47:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:20.165 12:47:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:20.424 [2024-10-01 12:47:02.834953] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:20.424 [2024-10-01 12:47:02.837213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:20.424 [2024-10-01 12:47:02.837286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:20.424 [2024-10-01 12:47:02.837315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:20.424 [2024-10-01 12:47:02.837365] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:27:20.424 [2024-10-01 12:47:02.837455] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:27:20.424 [2024-10-01 12:47:02.837486] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:27:20.424 [2024-10-01 12:47:02.837540] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:27:20.424 [2024-10-01 12:47:02.837563] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:20.424 [2024-10-01 12:47:02.837573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:27:20.424 request: 00:27:20.424 { 00:27:20.424 "name": "raid_bdev1", 00:27:20.424 "raid_level": "raid5f", 00:27:20.424 "base_bdevs": [ 00:27:20.424 "malloc1", 00:27:20.424 "malloc2", 00:27:20.424 "malloc3", 00:27:20.424 "malloc4" 00:27:20.424 ], 00:27:20.424 "superblock": false, 00:27:20.424 "strip_size_kb": 64, 00:27:20.424 "method": "bdev_raid_create", 00:27:20.424 "req_id": 1 00:27:20.424 } 00:27:20.424 Got JSON-RPC error response 00:27:20.424 response: 00:27:20.424 { 00:27:20.424 "code": -17, 00:27:20.424 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:20.424 } 00:27:20.424 12:47:02 -- common/autotest_common.sh@643 -- # es=1 00:27:20.424 12:47:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.424 12:47:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.424 12:47:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.424 12:47:02 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.424 12:47:02 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:27:20.683 12:47:03 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:27:20.683 12:47:03 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:27:20.683 12:47:03 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:20.683 [2024-10-01 12:47:03.198921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:20.683 [2024-10-01 12:47:03.199042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.683 [2024-10-01 12:47:03.199078] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:20.683 [2024-10-01 12:47:03.199108] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.683 [2024-10-01 12:47:03.201684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.683 [2024-10-01 12:47:03.201764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:20.683 [2024-10-01 12:47:03.201890] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:27:20.683 [2024-10-01 12:47:03.201950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:20.683 pt1 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:20.942 "name": "raid_bdev1", 00:27:20.942 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:20.942 "strip_size_kb": 64, 00:27:20.942 "state": "configuring", 00:27:20.942 "raid_level": "raid5f", 00:27:20.942 "superblock": true, 00:27:20.942 "num_base_bdevs": 4, 00:27:20.942 "num_base_bdevs_discovered": 1, 00:27:20.942 "num_base_bdevs_operational": 4, 00:27:20.942 "base_bdevs_list": [ 00:27:20.942 { 00:27:20.942 "name": "pt1", 00:27:20.942 "uuid": "63dc643c-287b-57bd-a06f-6830cf1130c5", 00:27:20.942 "is_configured": true, 00:27:20.942 "data_offset": 2048, 00:27:20.942 "data_size": 63488 00:27:20.942 }, 00:27:20.942 { 00:27:20.942 "name": null, 00:27:20.942 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:20.942 "is_configured": false, 00:27:20.942 "data_offset": 2048, 00:27:20.942 "data_size": 63488 00:27:20.942 }, 00:27:20.942 { 00:27:20.942 "name": null, 00:27:20.942 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:20.942 "is_configured": false, 00:27:20.942 "data_offset": 2048, 00:27:20.942 "data_size": 63488 00:27:20.942 }, 00:27:20.942 { 00:27:20.942 "name": null, 00:27:20.942 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:20.942 "is_configured": false, 00:27:20.942 "data_offset": 2048, 00:27:20.942 "data_size": 63488 00:27:20.942 } 00:27:20.942 ] 00:27:20.942 }' 00:27:20.942 12:47:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:20.942 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:27:21.510 12:47:03 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:27:21.510 12:47:03 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:21.768 [2024-10-01 12:47:04.134938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:21.768 [2024-10-01 12:47:04.135041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.768 [2024-10-01 12:47:04.135088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:21.768 [2024-10-01 12:47:04.135110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.768 [2024-10-01 12:47:04.135640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.768 [2024-10-01 12:47:04.135686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:21.768 [2024-10-01 12:47:04.135814] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:21.768 [2024-10-01 12:47:04.135839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:21.768 pt2 00:27:21.768 12:47:04 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:22.027 [2024-10-01 12:47:04.331024] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:22.027 12:47:04 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.027 12:47:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.028 12:47:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:22.028 "name": "raid_bdev1", 00:27:22.028 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:22.028 "strip_size_kb": 64, 00:27:22.028 "state": "configuring", 00:27:22.028 "raid_level": "raid5f", 00:27:22.028 "superblock": true, 00:27:22.028 "num_base_bdevs": 4, 00:27:22.028 "num_base_bdevs_discovered": 1, 00:27:22.028 "num_base_bdevs_operational": 4, 00:27:22.028 "base_bdevs_list": [ 00:27:22.028 { 00:27:22.028 "name": "pt1", 00:27:22.028 "uuid": "63dc643c-287b-57bd-a06f-6830cf1130c5", 00:27:22.028 "is_configured": true, 00:27:22.028 "data_offset": 2048, 00:27:22.028 "data_size": 63488 00:27:22.028 }, 00:27:22.028 { 00:27:22.028 "name": null, 00:27:22.028 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:22.028 "is_configured": false, 00:27:22.028 "data_offset": 2048, 00:27:22.028 "data_size": 63488 00:27:22.028 }, 00:27:22.028 { 00:27:22.028 "name": null, 00:27:22.028 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:22.028 "is_configured": false, 00:27:22.028 "data_offset": 2048, 00:27:22.028 "data_size": 63488 00:27:22.028 }, 00:27:22.028 { 00:27:22.028 "name": null, 00:27:22.028 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:22.028 "is_configured": false, 00:27:22.028 "data_offset": 2048, 00:27:22.028 "data_size": 63488 00:27:22.028 } 00:27:22.028 ] 00:27:22.028 }' 00:27:22.028 12:47:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:22.028 12:47:04 -- common/autotest_common.sh@10 -- # set +x 00:27:22.594 12:47:05 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:27:22.594 12:47:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:22.594 12:47:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:22.853 [2024-10-01 12:47:05.286433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:22.853 [2024-10-01 12:47:05.286532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.853 [2024-10-01 12:47:05.286573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:22.853 [2024-10-01 12:47:05.286595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.853 [2024-10-01 12:47:05.287105] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.853 [2024-10-01 12:47:05.287169] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:22.853 [2024-10-01 12:47:05.287273] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:22.853 [2024-10-01 12:47:05.287293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:22.853 pt2 00:27:22.853 12:47:05 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:27:22.853 12:47:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:22.853 12:47:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:23.112 [2024-10-01 12:47:05.470182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:23.112 [2024-10-01 12:47:05.470276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.112 [2024-10-01 12:47:05.470309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:23.112 [2024-10-01 12:47:05.470335] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.112 [2024-10-01 12:47:05.470830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.112 [2024-10-01 12:47:05.470888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:23.112 [2024-10-01 12:47:05.470997] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:27:23.112 [2024-10-01 12:47:05.471018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:23.112 pt3 00:27:23.112 12:47:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:27:23.112 12:47:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:27:23.112 12:47:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:23.371 [2024-10-01 12:47:05.653914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:23.371 [2024-10-01 12:47:05.654007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.371 [2024-10-01 12:47:05.654042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:23.371 [2024-10-01 12:47:05.654069] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.371 [2024-10-01 12:47:05.654514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.371 [2024-10-01 12:47:05.654563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:23.371 [2024-10-01 12:47:05.654671] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:27:23.371 [2024-10-01 12:47:05.654697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:23.371 [2024-10-01 12:47:05.654857] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:23.371 [2024-10-01 12:47:05.654866] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:23.371 [2024-10-01 12:47:05.654955] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:23.371 [2024-10-01 12:47:05.659460] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:23.371 [2024-10-01 12:47:05.659483] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:23.371 [2024-10-01 12:47:05.659674] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.371 pt4 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:23.371 "name": "raid_bdev1", 00:27:23.371 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:23.371 "strip_size_kb": 64, 00:27:23.371 "state": "online", 00:27:23.371 "raid_level": "raid5f", 00:27:23.371 "superblock": true, 00:27:23.371 "num_base_bdevs": 4, 00:27:23.371 "num_base_bdevs_discovered": 4, 00:27:23.371 "num_base_bdevs_operational": 4, 00:27:23.371 "base_bdevs_list": [ 00:27:23.371 { 00:27:23.371 "name": "pt1", 00:27:23.371 "uuid": "63dc643c-287b-57bd-a06f-6830cf1130c5", 00:27:23.371 "is_configured": true, 00:27:23.371 "data_offset": 2048, 00:27:23.371 "data_size": 63488 00:27:23.371 }, 00:27:23.371 { 00:27:23.371 "name": "pt2", 00:27:23.371 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:23.371 "is_configured": true, 00:27:23.371 "data_offset": 2048, 00:27:23.371 "data_size": 63488 00:27:23.371 }, 00:27:23.371 { 00:27:23.371 "name": "pt3", 00:27:23.371 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:23.371 "is_configured": true, 00:27:23.371 "data_offset": 2048, 00:27:23.371 "data_size": 63488 00:27:23.371 }, 00:27:23.371 { 00:27:23.371 "name": "pt4", 00:27:23.371 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:23.371 "is_configured": true, 00:27:23.371 "data_offset": 2048, 00:27:23.371 "data_size": 63488 00:27:23.371 } 00:27:23.371 ] 00:27:23.371 }' 00:27:23.371 12:47:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:23.371 12:47:05 -- common/autotest_common.sh@10 -- # set +x 00:27:23.939 12:47:06 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:23.939 12:47:06 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:27:24.198 [2024-10-01 12:47:06.599102] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:24.198 12:47:06 -- bdev/bdev_raid.sh@430 -- # '[' 71a48193-6027-4b86-b2e0-2fb795a8e0f4 '!=' 71a48193-6027-4b86-b2e0-2fb795a8e0f4 ']' 00:27:24.198 12:47:06 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:27:24.198 12:47:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:24.198 12:47:06 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:24.198 12:47:06 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:24.457 [2024-10-01 12:47:06.791014] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.457 12:47:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.717 12:47:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:24.717 "name": "raid_bdev1", 00:27:24.717 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:24.717 "strip_size_kb": 64, 00:27:24.717 "state": "online", 00:27:24.717 "raid_level": "raid5f", 00:27:24.717 "superblock": true, 00:27:24.717 "num_base_bdevs": 4, 00:27:24.717 "num_base_bdevs_discovered": 3, 00:27:24.717 "num_base_bdevs_operational": 3, 00:27:24.717 "base_bdevs_list": [ 00:27:24.717 { 00:27:24.717 "name": null, 00:27:24.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.717 "is_configured": false, 00:27:24.717 "data_offset": 2048, 00:27:24.717 "data_size": 63488 00:27:24.717 }, 00:27:24.717 { 00:27:24.717 "name": "pt2", 00:27:24.717 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:24.717 "is_configured": true, 00:27:24.717 "data_offset": 2048, 00:27:24.717 "data_size": 63488 00:27:24.717 }, 00:27:24.717 { 00:27:24.717 "name": "pt3", 00:27:24.717 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:24.717 "is_configured": true, 00:27:24.717 "data_offset": 2048, 00:27:24.717 "data_size": 63488 00:27:24.717 }, 00:27:24.717 { 00:27:24.717 "name": "pt4", 00:27:24.717 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:24.717 "is_configured": true, 00:27:24.717 "data_offset": 2048, 00:27:24.717 "data_size": 63488 00:27:24.717 } 00:27:24.717 ] 00:27:24.717 }' 00:27:24.717 12:47:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:24.717 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:27:25.286 12:47:07 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:25.286 [2024-10-01 12:47:07.750936] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:25.286 [2024-10-01 12:47:07.750979] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:25.286 [2024-10-01 12:47:07.751084] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:25.286 [2024-10-01 12:47:07.751174] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:25.286 [2024-10-01 12:47:07.751184] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:25.286 12:47:07 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.286 12:47:07 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:27:25.574 
12:47:07 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:27:25.574 12:47:07 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:27:25.574 12:47:07 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:27:25.574 12:47:07 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:27:25.574 12:47:07 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:25.833 12:47:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:27:25.833 12:47:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:27:25.833 12:47:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:25.833 12:47:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:27:25.833 12:47:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:27:25.833 12:47:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:26.092 12:47:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:27:26.092 12:47:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:27:26.092 12:47:08 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:27:26.092 12:47:08 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:27:26.092 12:47:08 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:26.352 [2024-10-01 12:47:08.697677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:26.352 [2024-10-01 12:47:08.697786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.352 [2024-10-01 12:47:08.697820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:26.352 [2024-10-01 12:47:08.697858] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.352 [2024-10-01 12:47:08.700520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.352 [2024-10-01 12:47:08.700605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:26.352 [2024-10-01 12:47:08.700724] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:26.352 [2024-10-01 12:47:08.700781] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:26.352 pt2 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.352 12:47:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.613 12:47:08 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:27:26.613 "name": "raid_bdev1", 00:27:26.613 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:26.613 "strip_size_kb": 64, 00:27:26.613 "state": "configuring", 00:27:26.613 "raid_level": "raid5f", 00:27:26.613 "superblock": true, 00:27:26.613 "num_base_bdevs": 4, 00:27:26.613 "num_base_bdevs_discovered": 1, 00:27:26.613 "num_base_bdevs_operational": 3, 00:27:26.613 "base_bdevs_list": [ 00:27:26.613 { 00:27:26.613 "name": null, 00:27:26.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.613 "is_configured": false, 00:27:26.613 "data_offset": 2048, 00:27:26.613 "data_size": 63488 00:27:26.613 }, 00:27:26.613 { 00:27:26.613 "name": "pt2", 00:27:26.613 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:26.613 "is_configured": true, 00:27:26.613 "data_offset": 2048, 00:27:26.613 "data_size": 63488 00:27:26.613 }, 00:27:26.613 { 00:27:26.613 "name": null, 00:27:26.613 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:26.613 "is_configured": false, 00:27:26.613 "data_offset": 2048, 00:27:26.613 "data_size": 63488 00:27:26.613 }, 00:27:26.613 { 00:27:26.613 "name": null, 00:27:26.613 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:26.613 "is_configured": false, 00:27:26.613 "data_offset": 2048, 00:27:26.613 "data_size": 63488 00:27:26.613 } 00:27:26.613 ] 00:27:26.613 }' 00:27:26.613 12:47:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:26.613 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:27.185 [2024-10-01 12:47:09.604376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:27.185 [2024-10-01 12:47:09.604466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.185 [2024-10-01 12:47:09.604508] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:27:27.185 [2024-10-01 12:47:09.604530] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.185 [2024-10-01 12:47:09.604998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.185 [2024-10-01 12:47:09.605052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:27.185 [2024-10-01 12:47:09.605178] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:27:27.185 [2024-10-01 12:47:09.605198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:27.185 pt3 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.185 12:47:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.445 12:47:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:27.445 "name": "raid_bdev1", 00:27:27.445 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:27.445 "strip_size_kb": 64, 00:27:27.445 "state": "configuring", 00:27:27.445 "raid_level": "raid5f", 00:27:27.445 "superblock": true, 00:27:27.445 "num_base_bdevs": 4, 00:27:27.445 "num_base_bdevs_discovered": 2, 00:27:27.445 "num_base_bdevs_operational": 3, 00:27:27.445 "base_bdevs_list": [ 00:27:27.445 { 00:27:27.445 "name": null, 00:27:27.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.445 "is_configured": false, 00:27:27.445 "data_offset": 2048, 00:27:27.445 "data_size": 63488 00:27:27.445 }, 00:27:27.445 { 00:27:27.445 "name": "pt2", 00:27:27.445 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:27.445 "is_configured": true, 00:27:27.445 "data_offset": 2048, 00:27:27.445 "data_size": 63488 00:27:27.445 }, 00:27:27.445 { 00:27:27.445 "name": "pt3", 00:27:27.445 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:27.445 "is_configured": true, 00:27:27.445 "data_offset": 2048, 00:27:27.445 "data_size": 63488 00:27:27.445 }, 00:27:27.445 { 00:27:27.445 "name": null, 00:27:27.445 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:27.445 "is_configured": false, 00:27:27.445 "data_offset": 2048, 00:27:27.445 "data_size": 63488 00:27:27.445 } 00:27:27.445 ] 00:27:27.445 }' 00:27:27.445 12:47:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:27.445 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:28.013 12:47:10 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:27:28.013 12:47:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:27:28.013 12:47:10 -- bdev/bdev_raid.sh@462 -- # i=3 00:27:28.013 12:47:10 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:28.013 [2024-10-01 12:47:10.543067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:28.013 [2024-10-01 12:47:10.543165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.013 [2024-10-01 12:47:10.543205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:27:28.013 [2024-10-01 12:47:10.543228] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.013 [2024-10-01 12:47:10.543683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.013 [2024-10-01 12:47:10.543717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:28.013 [2024-10-01 12:47:10.543826] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:27:28.013 [2024-10-01 12:47:10.543846] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:28.013 [2024-10-01 12:47:10.543982] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:27:28.013 [2024-10-01 12:47:10.543991] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:28.013 [2024-10-01 12:47:10.544102] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000062f0 00:27:28.272 [2024-10-01 12:47:10.548893] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:27:28.272 [2024-10-01 12:47:10.548916] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:27:28.272 [2024-10-01 12:47:10.549169] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:28.272 pt4 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:28.272 "name": "raid_bdev1", 00:27:28.272 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:28.272 "strip_size_kb": 64, 00:27:28.272 "state": "online", 00:27:28.272 "raid_level": "raid5f", 00:27:28.272 "superblock": true, 00:27:28.272 "num_base_bdevs": 4, 00:27:28.272 "num_base_bdevs_discovered": 3, 00:27:28.272 "num_base_bdevs_operational": 3, 00:27:28.272 "base_bdevs_list": [ 00:27:28.272 { 00:27:28.272 "name": null, 00:27:28.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.272 "is_configured": false, 00:27:28.272 "data_offset": 2048, 00:27:28.272 "data_size": 63488 00:27:28.272 }, 00:27:28.272 { 00:27:28.272 "name": "pt2", 00:27:28.272 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:28.272 "is_configured": true, 00:27:28.272 "data_offset": 2048, 00:27:28.272 "data_size": 63488 00:27:28.272 }, 00:27:28.272 { 00:27:28.272 "name": "pt3", 00:27:28.272 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:28.272 "is_configured": true, 00:27:28.272 "data_offset": 2048, 00:27:28.272 "data_size": 63488 00:27:28.272 }, 00:27:28.272 { 00:27:28.272 "name": "pt4", 00:27:28.272 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:28.272 "is_configured": true, 00:27:28.272 "data_offset": 2048, 00:27:28.272 "data_size": 63488 00:27:28.272 } 00:27:28.272 ] 00:27:28.272 }' 00:27:28.272 12:47:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:28.272 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:27:28.840 12:47:11 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:27:28.840 12:47:11 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:29.098 [2024-10-01 12:47:11.468723] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:29.098 [2024-10-01 12:47:11.468771] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:29.098 [2024-10-01 12:47:11.468857] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:29.098 [2024-10-01 12:47:11.468944] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:29.098 [2024-10-01 12:47:11.468954] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:27:29.098 12:47:11 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:27:29.098 12:47:11 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:29.356 [2024-10-01 12:47:11.851744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:29.356 [2024-10-01 12:47:11.851870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.356 [2024-10-01 12:47:11.851926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:27:29.356 [2024-10-01 12:47:11.851951] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.356 [2024-10-01 12:47:11.854394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.356 [2024-10-01 12:47:11.854476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:29.356 [2024-10-01 12:47:11.854628] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:27:29.356 [2024-10-01 12:47:11.854676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:29.356 pt1 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.356 12:47:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.615 12:47:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:29.615 "name": "raid_bdev1", 00:27:29.615 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:29.615 "strip_size_kb": 64, 00:27:29.615 "state": "configuring", 00:27:29.615 "raid_level": "raid5f", 00:27:29.615 "superblock": true, 00:27:29.615 "num_base_bdevs": 4, 00:27:29.615 "num_base_bdevs_discovered": 1, 00:27:29.615 "num_base_bdevs_operational": 4, 00:27:29.615 "base_bdevs_list": [ 00:27:29.615 { 00:27:29.615 "name": "pt1", 00:27:29.615 "uuid": "63dc643c-287b-57bd-a06f-6830cf1130c5", 00:27:29.615 "is_configured": true, 
00:27:29.615 "data_offset": 2048, 00:27:29.615 "data_size": 63488 00:27:29.615 }, 00:27:29.615 { 00:27:29.615 "name": null, 00:27:29.615 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:29.615 "is_configured": false, 00:27:29.615 "data_offset": 2048, 00:27:29.615 "data_size": 63488 00:27:29.615 }, 00:27:29.615 { 00:27:29.615 "name": null, 00:27:29.615 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:29.615 "is_configured": false, 00:27:29.615 "data_offset": 2048, 00:27:29.615 "data_size": 63488 00:27:29.615 }, 00:27:29.615 { 00:27:29.615 "name": null, 00:27:29.615 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:29.615 "is_configured": false, 00:27:29.615 "data_offset": 2048, 00:27:29.615 "data_size": 63488 00:27:29.615 } 00:27:29.615 ] 00:27:29.615 }' 00:27:29.615 12:47:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:29.615 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:27:30.182 12:47:12 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:27:30.182 12:47:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:27:30.182 12:47:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:30.441 12:47:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:27:30.441 12:47:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:27:30.441 12:47:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:30.699 12:47:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:27:30.699 12:47:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:27:30.699 12:47:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:30.699 12:47:13 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:27:30.699 12:47:13 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:27:30.699 12:47:13 -- bdev/bdev_raid.sh@489 -- # i=3 00:27:30.699 12:47:13 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:30.958 [2024-10-01 12:47:13.330935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:30.958 [2024-10-01 12:47:13.331053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.958 [2024-10-01 12:47:13.331090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:27:30.958 [2024-10-01 12:47:13.331121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.958 [2024-10-01 12:47:13.331639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.958 [2024-10-01 12:47:13.331693] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:30.958 [2024-10-01 12:47:13.331820] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:27:30.958 [2024-10-01 12:47:13.331833] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:30.958 [2024-10-01 12:47:13.331842] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:30.958 [2024-10-01 12:47:13.331864] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 00:27:30.958 [2024-10-01 12:47:13.331958] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:30.958 pt4 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.958 12:47:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.215 12:47:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:31.216 "name": "raid_bdev1", 00:27:31.216 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:31.216 "strip_size_kb": 64, 00:27:31.216 "state": "configuring", 00:27:31.216 "raid_level": "raid5f", 00:27:31.216 "superblock": true, 00:27:31.216 "num_base_bdevs": 4, 00:27:31.216 "num_base_bdevs_discovered": 1, 00:27:31.216 "num_base_bdevs_operational": 3, 00:27:31.216 "base_bdevs_list": [ 00:27:31.216 { 00:27:31.216 "name": null, 00:27:31.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.216 "is_configured": false, 00:27:31.216 "data_offset": 2048, 00:27:31.216 "data_size": 63488 00:27:31.216 }, 00:27:31.216 { 00:27:31.216 "name": null, 00:27:31.216 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:31.216 "is_configured": false, 00:27:31.216 "data_offset": 2048, 00:27:31.216 "data_size": 63488 00:27:31.216 }, 00:27:31.216 { 00:27:31.216 "name": null, 00:27:31.216 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:31.216 "is_configured": false, 00:27:31.216 "data_offset": 2048, 00:27:31.216 "data_size": 63488 00:27:31.216 }, 00:27:31.216 { 00:27:31.216 "name": "pt4", 00:27:31.216 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:31.216 "is_configured": true, 00:27:31.216 "data_offset": 2048, 00:27:31.216 "data_size": 63488 00:27:31.216 } 00:27:31.216 ] 00:27:31.216 }' 00:27:31.216 12:47:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:31.216 12:47:13 -- common/autotest_common.sh@10 -- # set +x 00:27:31.781 12:47:14 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:27:31.781 12:47:14 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:27:31.781 12:47:14 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:31.781 [2024-10-01 12:47:14.245585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:31.781 [2024-10-01 12:47:14.245759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.781 [2024-10-01 12:47:14.245807] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:27:31.781 [2024-10-01 12:47:14.245837] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.781 [2024-10-01 12:47:14.246366] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.781 [2024-10-01 12:47:14.246425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:31.781 [2024-10-01 12:47:14.246539] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:27:31.781 [2024-10-01 12:47:14.246572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:31.781 pt2 00:27:31.781 12:47:14 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:27:31.781 12:47:14 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:27:31.781 12:47:14 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:32.040 [2024-10-01 12:47:14.437333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:32.040 [2024-10-01 12:47:14.437442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.040 [2024-10-01 12:47:14.437480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:27:32.040 [2024-10-01 12:47:14.437510] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.040 [2024-10-01 12:47:14.438010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.040 [2024-10-01 12:47:14.438064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:32.040 [2024-10-01 12:47:14.438186] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:27:32.040 [2024-10-01 12:47:14.438211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:32.040 [2024-10-01 12:47:14.438342] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:27:32.040 [2024-10-01 12:47:14.438351] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:32.040 [2024-10-01 12:47:14.438449] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:32.040 [2024-10-01 12:47:14.443719] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:27:32.040 [2024-10-01 12:47:14.443748] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:27:32.040 [2024-10-01 12:47:14.444015] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.040 pt3 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.040 12:47:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.299 12:47:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:32.299 "name": "raid_bdev1", 00:27:32.299 "uuid": "71a48193-6027-4b86-b2e0-2fb795a8e0f4", 00:27:32.299 "strip_size_kb": 64, 00:27:32.299 "state": "online", 00:27:32.299 "raid_level": "raid5f", 00:27:32.299 "superblock": true, 00:27:32.299 "num_base_bdevs": 4, 00:27:32.299 "num_base_bdevs_discovered": 3, 00:27:32.299 "num_base_bdevs_operational": 3, 00:27:32.299 "base_bdevs_list": [ 00:27:32.299 { 00:27:32.299 "name": null, 00:27:32.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.299 "is_configured": false, 00:27:32.299 "data_offset": 2048, 00:27:32.299 "data_size": 63488 00:27:32.299 }, 00:27:32.299 { 00:27:32.299 "name": "pt2", 00:27:32.299 "uuid": "f90d34f2-a325-5ddc-b725-2f9291d67997", 00:27:32.299 "is_configured": true, 00:27:32.299 "data_offset": 2048, 00:27:32.299 "data_size": 63488 00:27:32.299 }, 00:27:32.299 { 00:27:32.299 "name": "pt3", 00:27:32.299 "uuid": "6b49547f-ed2f-5e80-a0c0-768849776ba8", 00:27:32.299 "is_configured": true, 00:27:32.299 "data_offset": 2048, 00:27:32.299 "data_size": 63488 00:27:32.299 }, 00:27:32.299 { 00:27:32.299 "name": "pt4", 00:27:32.299 "uuid": "cbc7df12-5a14-501a-ba18-6f9e48341dee", 00:27:32.299 "is_configured": true, 00:27:32.299 "data_offset": 2048, 00:27:32.299 "data_size": 63488 00:27:32.299 } 00:27:32.299 ] 00:27:32.299 }' 00:27:32.299 12:47:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:32.299 12:47:14 -- common/autotest_common.sh@10 -- # set +x 00:27:32.864 12:47:15 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:32.864 12:47:15 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:27:32.864 [2024-10-01 12:47:15.356199] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:32.864 12:47:15 -- bdev/bdev_raid.sh@506 -- # '[' 71a48193-6027-4b86-b2e0-2fb795a8e0f4 '!=' 71a48193-6027-4b86-b2e0-2fb795a8e0f4 ']' 00:27:32.864 12:47:15 -- bdev/bdev_raid.sh@511 -- # killprocess 130545 00:27:32.864 12:47:15 -- common/autotest_common.sh@926 -- # '[' -z 130545 ']' 00:27:32.864 12:47:15 -- common/autotest_common.sh@930 -- # kill -0 130545 00:27:32.864 12:47:15 -- common/autotest_common.sh@931 -- # uname 00:27:32.864 12:47:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:32.864 12:47:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130545 00:27:33.122 12:47:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:33.122 12:47:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:33.122 12:47:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130545' 00:27:33.122 killing process with pid 130545 00:27:33.122 12:47:15 -- common/autotest_common.sh@945 -- # kill 130545 00:27:33.122 [2024-10-01 12:47:15.413288] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:33.122 [2024-10-01 12:47:15.413384] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.122 [2024-10-01 12:47:15.413468] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.122 [2024-10-01 12:47:15.413480] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, 
state offline 00:27:33.122 12:47:15 -- common/autotest_common.sh@950 -- # wait 130545 00:27:33.379 [2024-10-01 12:47:15.776190] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@513 -- # return 0 00:27:34.752 00:27:34.752 real 0m19.409s 00:27:34.752 user 0m34.118s 00:27:34.752 sys 0m3.262s 00:27:34.752 12:47:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.752 12:47:17 -- common/autotest_common.sh@10 -- # set +x 00:27:34.752 ************************************ 00:27:34.752 END TEST raid5f_superblock_test 00:27:34.752 ************************************ 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:27:34.752 12:47:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:34.752 12:47:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:34.752 12:47:17 -- common/autotest_common.sh@10 -- # set +x 00:27:34.752 ************************************ 00:27:34.752 START TEST raid5f_rebuild_test 00:27:34.752 ************************************ 00:27:34.752 12:47:17 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@543 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@544 -- # raid_pid=131189 00:27:34.752 12:47:17 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131189 /var/tmp/spdk-raid.sock 00:27:34.752 12:47:17 -- common/autotest_common.sh@819 -- # '[' -z 131189 ']' 00:27:34.752 12:47:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:34.752 12:47:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:34.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:34.752 12:47:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:34.752 12:47:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:34.752 12:47:17 -- common/autotest_common.sh@10 -- # set +x 00:27:34.752 [2024-10-01 12:47:17.165861] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:34.752 [2024-10-01 12:47:17.166064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131189 ] 00:27:34.752 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:34.752 Zero copy mechanism will not be used. 00:27:35.009 [2024-10-01 12:47:17.349771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.267 [2024-10-01 12:47:17.589541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.528 [2024-10-01 12:47:17.845875] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:36.490 12:47:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:36.490 12:47:18 -- common/autotest_common.sh@852 -- # return 0 00:27:36.490 12:47:18 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:36.490 12:47:18 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:27:36.490 12:47:18 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:36.490 BaseBdev1 00:27:36.490 12:47:18 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:36.490 12:47:18 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:27:36.490 12:47:18 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:36.748 BaseBdev2 00:27:36.748 12:47:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:36.748 12:47:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:27:36.748 12:47:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:37.005 BaseBdev3 00:27:37.005 12:47:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:37.005 12:47:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:27:37.005 12:47:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:37.263 BaseBdev4 00:27:37.263 12:47:19 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:37.520 spare_malloc 00:27:37.520 
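For readers following the trace: the "spare" device that later replaces a removed base bdev is a three-layer stack, assembled by the RPC calls around this point in the log. A minimal recap against the same /var/tmp/spdk-raid.sock socket (the shell variable rpc is shorthand for readability, not from the test script):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b spare_malloc        # 32 MiB backing store, 512-byte blocks (65536 blocks, matching data_size above)
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
                                                      # no read delay; large write delays keep the rebuild window observable
$rpc bdev_passthru_create -b spare_delay -p spare     # top-level name the raid code later claims as "spare"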
12:47:19 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:37.520 spare_delay 00:27:37.777 12:47:20 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:37.777 [2024-10-01 12:47:20.235467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:37.777 [2024-10-01 12:47:20.235569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.777 [2024-10-01 12:47:20.235602] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:27:37.777 [2024-10-01 12:47:20.235646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.777 [2024-10-01 12:47:20.238165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.777 [2024-10-01 12:47:20.238415] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:37.777 spare 00:27:37.777 12:47:20 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:27:38.036 [2024-10-01 12:47:20.427244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:38.036 [2024-10-01 12:47:20.429503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:38.036 [2024-10-01 12:47:20.429683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:38.036 [2024-10-01 12:47:20.429746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:38.036 [2024-10-01 12:47:20.429943] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:27:38.036 [2024-10-01 12:47:20.430132] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:38.036 [2024-10-01 12:47:20.430591] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:27:38.036 [2024-10-01 12:47:20.439054] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:27:38.036 [2024-10-01 12:47:20.439166] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:27:38.036 [2024-10-01 12:47:20.439478] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:38.036 12:47:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.296 12:47:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:38.296 "name": "raid_bdev1", 00:27:38.296 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:38.296 "strip_size_kb": 64, 00:27:38.296 "state": "online", 00:27:38.296 "raid_level": "raid5f", 00:27:38.296 "superblock": false, 00:27:38.296 "num_base_bdevs": 4, 00:27:38.296 "num_base_bdevs_discovered": 4, 00:27:38.296 "num_base_bdevs_operational": 4, 00:27:38.296 "base_bdevs_list": [ 00:27:38.296 { 00:27:38.296 "name": "BaseBdev1", 00:27:38.296 "uuid": "5769630c-8330-4e58-beac-eb508d991f28", 00:27:38.296 "is_configured": true, 00:27:38.296 "data_offset": 0, 00:27:38.296 "data_size": 65536 00:27:38.296 }, 00:27:38.296 { 00:27:38.296 "name": "BaseBdev2", 00:27:38.296 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:38.296 "is_configured": true, 00:27:38.296 "data_offset": 0, 00:27:38.296 "data_size": 65536 00:27:38.296 }, 00:27:38.296 { 00:27:38.296 "name": "BaseBdev3", 00:27:38.296 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:38.296 "is_configured": true, 00:27:38.296 "data_offset": 0, 00:27:38.296 "data_size": 65536 00:27:38.296 }, 00:27:38.296 { 00:27:38.296 "name": "BaseBdev4", 00:27:38.296 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:38.296 "is_configured": true, 00:27:38.296 "data_offset": 0, 00:27:38.296 "data_size": 65536 00:27:38.296 } 00:27:38.296 ] 00:27:38.296 }' 00:27:38.296 12:47:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:38.296 12:47:20 -- common/autotest_common.sh@10 -- # set +x 00:27:38.863 12:47:21 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:38.863 12:47:21 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:27:38.863 [2024-10-01 12:47:21.271713] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:38.863 12:47:21 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:27:38.863 12:47:21 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:38.863 12:47:21 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.122 12:47:21 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:27:39.122 12:47:21 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:27:39.122 12:47:21 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:27:39.122 12:47:21 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@12 -- # local i 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:39.122 12:47:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:39.122 [2024-10-01 12:47:21.655022] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:27:39.380 /dev/nbd0 00:27:39.380 12:47:21 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:39.380 12:47:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:39.380 12:47:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:39.380 12:47:21 -- common/autotest_common.sh@857 -- # local i 00:27:39.381 12:47:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:39.381 12:47:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:39.381 12:47:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:39.381 12:47:21 -- common/autotest_common.sh@861 -- # break 00:27:39.381 12:47:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:39.381 12:47:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:39.381 12:47:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:39.381 1+0 records in 00:27:39.381 1+0 records out 00:27:39.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296254 s, 13.8 MB/s 00:27:39.381 12:47:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.381 12:47:21 -- common/autotest_common.sh@874 -- # size=4096 00:27:39.381 12:47:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.381 12:47:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:39.381 12:47:21 -- common/autotest_common.sh@877 -- # return 0 00:27:39.381 12:47:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:39.381 12:47:21 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:39.381 12:47:21 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:27:39.381 12:47:21 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:27:39.381 12:47:21 -- bdev/bdev_raid.sh@582 -- # echo 192 00:27:39.381 12:47:21 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:27:39.947 512+0 records in 00:27:39.947 512+0 records out 00:27:39.947 100663296 bytes (101 MB, 96 MiB) copied, 0.512033 s, 197 MB/s 00:27:39.947 12:47:22 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:39.947 12:47:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:39.947 12:47:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:39.947 12:47:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:39.947 12:47:22 -- bdev/nbd_common.sh@51 -- # local i 00:27:39.947 12:47:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:39.948 [2024-10-01 12:47:22.470962] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@41 -- # break 00:27:39.948 12:47:22 -- bdev/nbd_common.sh@45 -- # return 0 00:27:39.948 12:47:22 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:40.206 [2024-10-01 12:47:22.656739] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.206 12:47:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.465 12:47:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:40.465 "name": "raid_bdev1", 00:27:40.465 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:40.465 "strip_size_kb": 64, 00:27:40.465 "state": "online", 00:27:40.465 "raid_level": "raid5f", 00:27:40.465 "superblock": false, 00:27:40.465 "num_base_bdevs": 4, 00:27:40.465 "num_base_bdevs_discovered": 3, 00:27:40.465 "num_base_bdevs_operational": 3, 00:27:40.465 "base_bdevs_list": [ 00:27:40.465 { 00:27:40.465 "name": null, 00:27:40.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.465 "is_configured": false, 00:27:40.465 "data_offset": 0, 00:27:40.465 "data_size": 65536 00:27:40.465 }, 00:27:40.465 { 00:27:40.465 "name": "BaseBdev2", 00:27:40.465 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:40.465 "is_configured": true, 00:27:40.465 "data_offset": 0, 00:27:40.465 "data_size": 65536 00:27:40.465 }, 00:27:40.465 { 00:27:40.465 "name": "BaseBdev3", 00:27:40.465 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:40.465 "is_configured": true, 00:27:40.465 "data_offset": 0, 00:27:40.465 "data_size": 65536 00:27:40.465 }, 00:27:40.465 { 00:27:40.465 "name": "BaseBdev4", 00:27:40.465 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:40.465 "is_configured": true, 00:27:40.465 "data_offset": 0, 00:27:40.465 "data_size": 65536 00:27:40.465 } 00:27:40.465 ] 00:27:40.465 }' 00:27:40.465 12:47:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:40.465 12:47:22 -- common/autotest_common.sh@10 -- # set +x 00:27:41.032 12:47:23 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:41.290 [2024-10-01 12:47:23.619306] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:41.290 [2024-10-01 12:47:23.619392] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:41.291 [2024-10-01 12:47:23.637823] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:27:41.291 [2024-10-01 12:47:23.649216] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:41.291 12:47:23 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:27:42.230 12:47:24 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.230 12:47:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:42.230 
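A note on the dd sizing earlier in the trace (assuming raid5f keeps one parity strip per stripe, which the numbers below bear out):

# data strips per stripe: num_base_bdevs - 1 = 4 - 1 = 3
# write unit: 3 * 64 KiB = 192 KiB = 196608 bytes = 384 blocks of 512 B   (the write_unit_size=384 above)
# dd bs=196608 count=512 therefore writes exactly 512 full stripes:
#   196608 * 512 = 100663296 bytes (96 MiB), matching the "100663296 bytes ... copied" summary line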
12:47:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:42.230 12:47:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:42.230 12:47:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:42.230 12:47:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.230 12:47:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.490 12:47:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:42.490 "name": "raid_bdev1", 00:27:42.490 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:42.490 "strip_size_kb": 64, 00:27:42.490 "state": "online", 00:27:42.490 "raid_level": "raid5f", 00:27:42.490 "superblock": false, 00:27:42.490 "num_base_bdevs": 4, 00:27:42.490 "num_base_bdevs_discovered": 4, 00:27:42.490 "num_base_bdevs_operational": 4, 00:27:42.490 "process": { 00:27:42.490 "type": "rebuild", 00:27:42.490 "target": "spare", 00:27:42.490 "progress": { 00:27:42.490 "blocks": 21120, 00:27:42.490 "percent": 10 00:27:42.490 } 00:27:42.490 }, 00:27:42.490 "base_bdevs_list": [ 00:27:42.490 { 00:27:42.490 "name": "spare", 00:27:42.490 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:42.490 "is_configured": true, 00:27:42.490 "data_offset": 0, 00:27:42.490 "data_size": 65536 00:27:42.490 }, 00:27:42.490 { 00:27:42.490 "name": "BaseBdev2", 00:27:42.490 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:42.490 "is_configured": true, 00:27:42.490 "data_offset": 0, 00:27:42.490 "data_size": 65536 00:27:42.490 }, 00:27:42.490 { 00:27:42.490 "name": "BaseBdev3", 00:27:42.490 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:42.490 "is_configured": true, 00:27:42.490 "data_offset": 0, 00:27:42.490 "data_size": 65536 00:27:42.490 }, 00:27:42.490 { 00:27:42.490 "name": "BaseBdev4", 00:27:42.490 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:42.490 "is_configured": true, 00:27:42.490 "data_offset": 0, 00:27:42.490 "data_size": 65536 00:27:42.490 } 00:27:42.490 ] 00:27:42.490 }' 00:27:42.490 12:47:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:42.490 12:47:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:42.490 12:47:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:42.490 12:47:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.490 12:47:24 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:42.749 [2024-10-01 12:47:25.104649] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:42.749 [2024-10-01 12:47:25.158998] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:42.749 [2024-10-01 12:47:25.159113] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.749 12:47:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.008 12:47:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:43.008 "name": "raid_bdev1", 00:27:43.008 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:43.008 "strip_size_kb": 64, 00:27:43.008 "state": "online", 00:27:43.008 "raid_level": "raid5f", 00:27:43.008 "superblock": false, 00:27:43.008 "num_base_bdevs": 4, 00:27:43.008 "num_base_bdevs_discovered": 3, 00:27:43.008 "num_base_bdevs_operational": 3, 00:27:43.008 "base_bdevs_list": [ 00:27:43.008 { 00:27:43.008 "name": null, 00:27:43.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.008 "is_configured": false, 00:27:43.008 "data_offset": 0, 00:27:43.008 "data_size": 65536 00:27:43.008 }, 00:27:43.008 { 00:27:43.008 "name": "BaseBdev2", 00:27:43.008 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:43.008 "is_configured": true, 00:27:43.008 "data_offset": 0, 00:27:43.008 "data_size": 65536 00:27:43.008 }, 00:27:43.008 { 00:27:43.008 "name": "BaseBdev3", 00:27:43.008 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:43.008 "is_configured": true, 00:27:43.008 "data_offset": 0, 00:27:43.008 "data_size": 65536 00:27:43.008 }, 00:27:43.008 { 00:27:43.008 "name": "BaseBdev4", 00:27:43.008 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:43.008 "is_configured": true, 00:27:43.008 "data_offset": 0, 00:27:43.008 "data_size": 65536 00:27:43.008 } 00:27:43.008 ] 00:27:43.008 }' 00:27:43.008 12:47:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:43.008 12:47:25 -- common/autotest_common.sh@10 -- # set +x 00:27:43.578 12:47:25 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:43.578 12:47:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:43.578 12:47:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:43.578 12:47:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:43.578 12:47:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:43.578 12:47:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.578 12:47:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.836 12:47:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:43.836 "name": "raid_bdev1", 00:27:43.836 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:43.836 "strip_size_kb": 64, 00:27:43.836 "state": "online", 00:27:43.836 "raid_level": "raid5f", 00:27:43.836 "superblock": false, 00:27:43.836 "num_base_bdevs": 4, 00:27:43.836 "num_base_bdevs_discovered": 3, 00:27:43.836 "num_base_bdevs_operational": 3, 00:27:43.836 "base_bdevs_list": [ 00:27:43.836 { 00:27:43.836 "name": null, 00:27:43.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.836 "is_configured": false, 00:27:43.836 "data_offset": 0, 00:27:43.836 "data_size": 65536 00:27:43.836 }, 00:27:43.836 { 00:27:43.836 "name": "BaseBdev2", 00:27:43.836 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:43.836 "is_configured": true, 00:27:43.836 "data_offset": 0, 00:27:43.836 "data_size": 65536 00:27:43.836 }, 00:27:43.836 { 00:27:43.836 "name": "BaseBdev3", 00:27:43.836 "uuid": 
"442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:43.836 "is_configured": true, 00:27:43.836 "data_offset": 0, 00:27:43.836 "data_size": 65536 00:27:43.836 }, 00:27:43.836 { 00:27:43.836 "name": "BaseBdev4", 00:27:43.836 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:43.836 "is_configured": true, 00:27:43.836 "data_offset": 0, 00:27:43.836 "data_size": 65536 00:27:43.836 } 00:27:43.836 ] 00:27:43.836 }' 00:27:43.836 12:47:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:43.836 12:47:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:43.836 12:47:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:43.836 12:47:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:43.836 12:47:26 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:44.094 [2024-10-01 12:47:26.508583] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:44.094 [2024-10-01 12:47:26.508645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:44.094 [2024-10-01 12:47:26.525938] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:27:44.094 [2024-10-01 12:47:26.536132] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:44.094 12:47:26 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:27:45.032 12:47:27 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.032 12:47:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:45.032 12:47:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:45.032 12:47:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:45.032 12:47:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:45.032 12:47:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.032 12:47:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.293 12:47:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:45.293 "name": "raid_bdev1", 00:27:45.293 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:45.293 "strip_size_kb": 64, 00:27:45.293 "state": "online", 00:27:45.293 "raid_level": "raid5f", 00:27:45.293 "superblock": false, 00:27:45.293 "num_base_bdevs": 4, 00:27:45.293 "num_base_bdevs_discovered": 4, 00:27:45.293 "num_base_bdevs_operational": 4, 00:27:45.293 "process": { 00:27:45.293 "type": "rebuild", 00:27:45.293 "target": "spare", 00:27:45.293 "progress": { 00:27:45.293 "blocks": 21120, 00:27:45.293 "percent": 10 00:27:45.293 } 00:27:45.293 }, 00:27:45.293 "base_bdevs_list": [ 00:27:45.293 { 00:27:45.293 "name": "spare", 00:27:45.293 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:45.293 "is_configured": true, 00:27:45.293 "data_offset": 0, 00:27:45.293 "data_size": 65536 00:27:45.293 }, 00:27:45.293 { 00:27:45.293 "name": "BaseBdev2", 00:27:45.293 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:45.293 "is_configured": true, 00:27:45.293 "data_offset": 0, 00:27:45.293 "data_size": 65536 00:27:45.293 }, 00:27:45.293 { 00:27:45.293 "name": "BaseBdev3", 00:27:45.293 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:45.293 "is_configured": true, 00:27:45.293 "data_offset": 0, 00:27:45.295 "data_size": 65536 00:27:45.295 }, 00:27:45.295 { 00:27:45.295 "name": "BaseBdev4", 00:27:45.295 "uuid": 
"9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:45.295 "is_configured": true, 00:27:45.295 "data_offset": 0, 00:27:45.295 "data_size": 65536 00:27:45.295 } 00:27:45.295 ] 00:27:45.295 }' 00:27:45.295 12:47:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:45.295 12:47:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.295 12:47:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@657 -- # local timeout=634 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.554 12:47:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.554 12:47:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:45.554 "name": "raid_bdev1", 00:27:45.554 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:45.554 "strip_size_kb": 64, 00:27:45.554 "state": "online", 00:27:45.554 "raid_level": "raid5f", 00:27:45.554 "superblock": false, 00:27:45.554 "num_base_bdevs": 4, 00:27:45.554 "num_base_bdevs_discovered": 4, 00:27:45.554 "num_base_bdevs_operational": 4, 00:27:45.554 "process": { 00:27:45.554 "type": "rebuild", 00:27:45.554 "target": "spare", 00:27:45.554 "progress": { 00:27:45.554 "blocks": 26880, 00:27:45.554 "percent": 13 00:27:45.554 } 00:27:45.554 }, 00:27:45.554 "base_bdevs_list": [ 00:27:45.554 { 00:27:45.554 "name": "spare", 00:27:45.554 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:45.554 "is_configured": true, 00:27:45.554 "data_offset": 0, 00:27:45.554 "data_size": 65536 00:27:45.554 }, 00:27:45.554 { 00:27:45.554 "name": "BaseBdev2", 00:27:45.554 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:45.554 "is_configured": true, 00:27:45.554 "data_offset": 0, 00:27:45.554 "data_size": 65536 00:27:45.554 }, 00:27:45.554 { 00:27:45.554 "name": "BaseBdev3", 00:27:45.554 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:45.554 "is_configured": true, 00:27:45.554 "data_offset": 0, 00:27:45.554 "data_size": 65536 00:27:45.554 }, 00:27:45.554 { 00:27:45.554 "name": "BaseBdev4", 00:27:45.554 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:45.554 "is_configured": true, 00:27:45.554 "data_offset": 0, 00:27:45.554 "data_size": 65536 00:27:45.554 } 00:27:45.554 ] 00:27:45.554 }' 00:27:45.554 12:47:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:45.554 12:47:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.554 12:47:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:45.812 12:47:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.812 12:47:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:46.748 
12:47:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:46.748 12:47:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:46.748 12:47:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:46.748 12:47:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:46.748 12:47:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:46.748 12:47:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:46.748 12:47:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.748 12:47:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.007 12:47:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:47.007 "name": "raid_bdev1", 00:27:47.007 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:47.007 "strip_size_kb": 64, 00:27:47.007 "state": "online", 00:27:47.007 "raid_level": "raid5f", 00:27:47.007 "superblock": false, 00:27:47.007 "num_base_bdevs": 4, 00:27:47.007 "num_base_bdevs_discovered": 4, 00:27:47.007 "num_base_bdevs_operational": 4, 00:27:47.007 "process": { 00:27:47.007 "type": "rebuild", 00:27:47.007 "target": "spare", 00:27:47.007 "progress": { 00:27:47.007 "blocks": 51840, 00:27:47.007 "percent": 26 00:27:47.007 } 00:27:47.007 }, 00:27:47.007 "base_bdevs_list": [ 00:27:47.007 { 00:27:47.007 "name": "spare", 00:27:47.007 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:47.007 "is_configured": true, 00:27:47.007 "data_offset": 0, 00:27:47.007 "data_size": 65536 00:27:47.007 }, 00:27:47.007 { 00:27:47.007 "name": "BaseBdev2", 00:27:47.007 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:47.007 "is_configured": true, 00:27:47.007 "data_offset": 0, 00:27:47.007 "data_size": 65536 00:27:47.007 }, 00:27:47.007 { 00:27:47.007 "name": "BaseBdev3", 00:27:47.007 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:47.007 "is_configured": true, 00:27:47.007 "data_offset": 0, 00:27:47.007 "data_size": 65536 00:27:47.007 }, 00:27:47.007 { 00:27:47.007 "name": "BaseBdev4", 00:27:47.007 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:47.007 "is_configured": true, 00:27:47.007 "data_offset": 0, 00:27:47.007 "data_size": 65536 00:27:47.007 } 00:27:47.007 ] 00:27:47.007 }' 00:27:47.007 12:47:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:47.007 12:47:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.007 12:47:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:47.007 12:47:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.007 12:47:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.944 12:47:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.201 12:47:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:48.201 "name": "raid_bdev1", 00:27:48.201 "uuid": 
"2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:48.201 "strip_size_kb": 64, 00:27:48.201 "state": "online", 00:27:48.201 "raid_level": "raid5f", 00:27:48.201 "superblock": false, 00:27:48.201 "num_base_bdevs": 4, 00:27:48.201 "num_base_bdevs_discovered": 4, 00:27:48.201 "num_base_bdevs_operational": 4, 00:27:48.201 "process": { 00:27:48.201 "type": "rebuild", 00:27:48.201 "target": "spare", 00:27:48.201 "progress": { 00:27:48.201 "blocks": 76800, 00:27:48.201 "percent": 39 00:27:48.201 } 00:27:48.201 }, 00:27:48.202 "base_bdevs_list": [ 00:27:48.202 { 00:27:48.202 "name": "spare", 00:27:48.202 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:48.202 "is_configured": true, 00:27:48.202 "data_offset": 0, 00:27:48.202 "data_size": 65536 00:27:48.202 }, 00:27:48.202 { 00:27:48.202 "name": "BaseBdev2", 00:27:48.202 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:48.202 "is_configured": true, 00:27:48.202 "data_offset": 0, 00:27:48.202 "data_size": 65536 00:27:48.202 }, 00:27:48.202 { 00:27:48.202 "name": "BaseBdev3", 00:27:48.202 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:48.202 "is_configured": true, 00:27:48.202 "data_offset": 0, 00:27:48.202 "data_size": 65536 00:27:48.202 }, 00:27:48.202 { 00:27:48.202 "name": "BaseBdev4", 00:27:48.202 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:48.202 "is_configured": true, 00:27:48.202 "data_offset": 0, 00:27:48.202 "data_size": 65536 00:27:48.202 } 00:27:48.202 ] 00:27:48.202 }' 00:27:48.202 12:47:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:48.202 12:47:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:48.202 12:47:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:48.202 12:47:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:48.202 12:47:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.577 12:47:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:49.577 "name": "raid_bdev1", 00:27:49.578 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:49.578 "strip_size_kb": 64, 00:27:49.578 "state": "online", 00:27:49.578 "raid_level": "raid5f", 00:27:49.578 "superblock": false, 00:27:49.578 "num_base_bdevs": 4, 00:27:49.578 "num_base_bdevs_discovered": 4, 00:27:49.578 "num_base_bdevs_operational": 4, 00:27:49.578 "process": { 00:27:49.578 "type": "rebuild", 00:27:49.578 "target": "spare", 00:27:49.578 "progress": { 00:27:49.578 "blocks": 101760, 00:27:49.578 "percent": 51 00:27:49.578 } 00:27:49.578 }, 00:27:49.578 "base_bdevs_list": [ 00:27:49.578 { 00:27:49.578 "name": "spare", 00:27:49.578 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:49.578 "is_configured": true, 00:27:49.578 "data_offset": 0, 00:27:49.578 "data_size": 65536 00:27:49.578 }, 00:27:49.578 { 00:27:49.578 "name": "BaseBdev2", 00:27:49.578 "uuid": 
"26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:49.578 "is_configured": true, 00:27:49.578 "data_offset": 0, 00:27:49.578 "data_size": 65536 00:27:49.578 }, 00:27:49.578 { 00:27:49.578 "name": "BaseBdev3", 00:27:49.578 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:49.578 "is_configured": true, 00:27:49.578 "data_offset": 0, 00:27:49.578 "data_size": 65536 00:27:49.578 }, 00:27:49.578 { 00:27:49.578 "name": "BaseBdev4", 00:27:49.578 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:49.578 "is_configured": true, 00:27:49.578 "data_offset": 0, 00:27:49.578 "data_size": 65536 00:27:49.578 } 00:27:49.578 ] 00:27:49.578 }' 00:27:49.578 12:47:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:49.578 12:47:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:49.578 12:47:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:49.578 12:47:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:49.578 12:47:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.514 12:47:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.772 12:47:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:50.772 "name": "raid_bdev1", 00:27:50.772 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:50.772 "strip_size_kb": 64, 00:27:50.772 "state": "online", 00:27:50.772 "raid_level": "raid5f", 00:27:50.772 "superblock": false, 00:27:50.772 "num_base_bdevs": 4, 00:27:50.772 "num_base_bdevs_discovered": 4, 00:27:50.772 "num_base_bdevs_operational": 4, 00:27:50.772 "process": { 00:27:50.772 "type": "rebuild", 00:27:50.772 "target": "spare", 00:27:50.772 "progress": { 00:27:50.772 "blocks": 124800, 00:27:50.772 "percent": 63 00:27:50.772 } 00:27:50.772 }, 00:27:50.772 "base_bdevs_list": [ 00:27:50.772 { 00:27:50.772 "name": "spare", 00:27:50.772 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:50.772 "is_configured": true, 00:27:50.772 "data_offset": 0, 00:27:50.772 "data_size": 65536 00:27:50.772 }, 00:27:50.772 { 00:27:50.772 "name": "BaseBdev2", 00:27:50.772 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:50.772 "is_configured": true, 00:27:50.772 "data_offset": 0, 00:27:50.772 "data_size": 65536 00:27:50.772 }, 00:27:50.772 { 00:27:50.772 "name": "BaseBdev3", 00:27:50.772 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:50.772 "is_configured": true, 00:27:50.772 "data_offset": 0, 00:27:50.772 "data_size": 65536 00:27:50.772 }, 00:27:50.772 { 00:27:50.772 "name": "BaseBdev4", 00:27:50.772 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:50.772 "is_configured": true, 00:27:50.772 "data_offset": 0, 00:27:50.772 "data_size": 65536 00:27:50.772 } 00:27:50.772 ] 00:27:50.772 }' 00:27:50.772 12:47:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:50.772 12:47:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:50.772 12:47:33 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:50.772 12:47:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:50.772 12:47:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:52.154 "name": "raid_bdev1", 00:27:52.154 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:52.154 "strip_size_kb": 64, 00:27:52.154 "state": "online", 00:27:52.154 "raid_level": "raid5f", 00:27:52.154 "superblock": false, 00:27:52.154 "num_base_bdevs": 4, 00:27:52.154 "num_base_bdevs_discovered": 4, 00:27:52.154 "num_base_bdevs_operational": 4, 00:27:52.154 "process": { 00:27:52.154 "type": "rebuild", 00:27:52.154 "target": "spare", 00:27:52.154 "progress": { 00:27:52.154 "blocks": 149760, 00:27:52.154 "percent": 76 00:27:52.154 } 00:27:52.154 }, 00:27:52.154 "base_bdevs_list": [ 00:27:52.154 { 00:27:52.154 "name": "spare", 00:27:52.154 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:52.154 "is_configured": true, 00:27:52.154 "data_offset": 0, 00:27:52.154 "data_size": 65536 00:27:52.154 }, 00:27:52.154 { 00:27:52.154 "name": "BaseBdev2", 00:27:52.154 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:52.154 "is_configured": true, 00:27:52.154 "data_offset": 0, 00:27:52.154 "data_size": 65536 00:27:52.154 }, 00:27:52.154 { 00:27:52.154 "name": "BaseBdev3", 00:27:52.154 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:52.154 "is_configured": true, 00:27:52.154 "data_offset": 0, 00:27:52.154 "data_size": 65536 00:27:52.154 }, 00:27:52.154 { 00:27:52.154 "name": "BaseBdev4", 00:27:52.154 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:52.154 "is_configured": true, 00:27:52.154 "data_offset": 0, 00:27:52.154 "data_size": 65536 00:27:52.154 } 00:27:52.154 ] 00:27:52.154 }' 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.154 12:47:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:27:53.092 12:47:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.352 12:47:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:53.352 "name": "raid_bdev1", 00:27:53.352 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:53.352 "strip_size_kb": 64, 00:27:53.352 "state": "online", 00:27:53.352 "raid_level": "raid5f", 00:27:53.352 "superblock": false, 00:27:53.352 "num_base_bdevs": 4, 00:27:53.352 "num_base_bdevs_discovered": 4, 00:27:53.352 "num_base_bdevs_operational": 4, 00:27:53.352 "process": { 00:27:53.352 "type": "rebuild", 00:27:53.352 "target": "spare", 00:27:53.352 "progress": { 00:27:53.352 "blocks": 174720, 00:27:53.352 "percent": 88 00:27:53.352 } 00:27:53.352 }, 00:27:53.352 "base_bdevs_list": [ 00:27:53.352 { 00:27:53.352 "name": "spare", 00:27:53.352 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:53.352 "is_configured": true, 00:27:53.352 "data_offset": 0, 00:27:53.352 "data_size": 65536 00:27:53.352 }, 00:27:53.352 { 00:27:53.352 "name": "BaseBdev2", 00:27:53.352 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:53.352 "is_configured": true, 00:27:53.352 "data_offset": 0, 00:27:53.352 "data_size": 65536 00:27:53.352 }, 00:27:53.352 { 00:27:53.352 "name": "BaseBdev3", 00:27:53.352 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:53.352 "is_configured": true, 00:27:53.352 "data_offset": 0, 00:27:53.352 "data_size": 65536 00:27:53.352 }, 00:27:53.352 { 00:27:53.352 "name": "BaseBdev4", 00:27:53.352 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:53.352 "is_configured": true, 00:27:53.352 "data_offset": 0, 00:27:53.352 "data_size": 65536 00:27:53.352 } 00:27:53.352 ] 00:27:53.352 }' 00:27:53.352 12:47:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:53.352 12:47:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:53.352 12:47:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:53.352 12:47:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:53.352 12:47:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.306 12:47:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.570 [2024-10-01 12:47:36.881284] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:54.570 [2024-10-01 12:47:36.881380] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:54.570 [2024-10-01 12:47:36.881466] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.570 12:47:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:54.570 "name": "raid_bdev1", 00:27:54.570 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:54.570 "strip_size_kb": 64, 00:27:54.570 "state": "online", 00:27:54.570 "raid_level": "raid5f", 00:27:54.570 "superblock": false, 00:27:54.570 "num_base_bdevs": 4, 00:27:54.570 
"num_base_bdevs_discovered": 4, 00:27:54.570 "num_base_bdevs_operational": 4, 00:27:54.570 "base_bdevs_list": [ 00:27:54.570 { 00:27:54.570 "name": "spare", 00:27:54.570 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:54.570 "is_configured": true, 00:27:54.570 "data_offset": 0, 00:27:54.570 "data_size": 65536 00:27:54.570 }, 00:27:54.570 { 00:27:54.570 "name": "BaseBdev2", 00:27:54.570 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:54.570 "is_configured": true, 00:27:54.570 "data_offset": 0, 00:27:54.570 "data_size": 65536 00:27:54.570 }, 00:27:54.570 { 00:27:54.570 "name": "BaseBdev3", 00:27:54.570 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:54.570 "is_configured": true, 00:27:54.570 "data_offset": 0, 00:27:54.570 "data_size": 65536 00:27:54.570 }, 00:27:54.570 { 00:27:54.570 "name": "BaseBdev4", 00:27:54.570 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:54.570 "is_configured": true, 00:27:54.570 "data_offset": 0, 00:27:54.570 "data_size": 65536 00:27:54.570 } 00:27:54.570 ] 00:27:54.570 }' 00:27:54.570 12:47:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:54.570 12:47:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@660 -- # break 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:54.832 "name": "raid_bdev1", 00:27:54.832 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:54.832 "strip_size_kb": 64, 00:27:54.832 "state": "online", 00:27:54.832 "raid_level": "raid5f", 00:27:54.832 "superblock": false, 00:27:54.832 "num_base_bdevs": 4, 00:27:54.832 "num_base_bdevs_discovered": 4, 00:27:54.832 "num_base_bdevs_operational": 4, 00:27:54.832 "base_bdevs_list": [ 00:27:54.832 { 00:27:54.832 "name": "spare", 00:27:54.832 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 }, 00:27:54.832 { 00:27:54.832 "name": "BaseBdev2", 00:27:54.832 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 }, 00:27:54.832 { 00:27:54.832 "name": "BaseBdev3", 00:27:54.832 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 }, 00:27:54.832 { 00:27:54.832 "name": "BaseBdev4", 00:27:54.832 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 } 00:27:54.832 ] 00:27:54.832 }' 00:27:54.832 12:47:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:55.092 
12:47:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:55.092 "name": "raid_bdev1", 00:27:55.092 "uuid": "2e16f036-1b86-4366-b6fb-cceeb510295a", 00:27:55.092 "strip_size_kb": 64, 00:27:55.092 "state": "online", 00:27:55.092 "raid_level": "raid5f", 00:27:55.092 "superblock": false, 00:27:55.092 "num_base_bdevs": 4, 00:27:55.092 "num_base_bdevs_discovered": 4, 00:27:55.092 "num_base_bdevs_operational": 4, 00:27:55.092 "base_bdevs_list": [ 00:27:55.092 { 00:27:55.092 "name": "spare", 00:27:55.092 "uuid": "a7c66e20-55eb-5bcf-abea-ec234c6c9cd9", 00:27:55.092 "is_configured": true, 00:27:55.092 "data_offset": 0, 00:27:55.092 "data_size": 65536 00:27:55.092 }, 00:27:55.092 { 00:27:55.092 "name": "BaseBdev2", 00:27:55.092 "uuid": "26a06e1a-c3f0-4f06-90a7-8dc16ecc39da", 00:27:55.092 "is_configured": true, 00:27:55.092 "data_offset": 0, 00:27:55.092 "data_size": 65536 00:27:55.092 }, 00:27:55.092 { 00:27:55.092 "name": "BaseBdev3", 00:27:55.092 "uuid": "442ed94c-822a-4027-a9ab-46ead6f9319b", 00:27:55.092 "is_configured": true, 00:27:55.092 "data_offset": 0, 00:27:55.092 "data_size": 65536 00:27:55.092 }, 00:27:55.092 { 00:27:55.092 "name": "BaseBdev4", 00:27:55.092 "uuid": "9f9cb93e-d4dd-4668-83b3-f5387ccb3484", 00:27:55.092 "is_configured": true, 00:27:55.092 "data_offset": 0, 00:27:55.092 "data_size": 65536 00:27:55.092 } 00:27:55.092 ] 00:27:55.092 }' 00:27:55.092 12:47:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:55.092 12:47:37 -- common/autotest_common.sh@10 -- # set +x 00:27:55.662 12:47:38 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:55.922 [2024-10-01 12:47:38.319085] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:55.923 [2024-10-01 12:47:38.319120] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:55.923 [2024-10-01 12:47:38.319209] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:55.923 [2024-10-01 12:47:38.319295] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:55.923 [2024-10-01 12:47:38.319305] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:27:55.923 12:47:38 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.923 12:47:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:56.183 12:47:38 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:56.183 12:47:38 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:56.183 12:47:38 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@12 -- # local i 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.183 12:47:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:56.444 /dev/nbd0 00:27:56.444 12:47:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:56.444 12:47:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:56.444 12:47:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:56.444 12:47:38 -- common/autotest_common.sh@857 -- # local i 00:27:56.444 12:47:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:56.444 12:47:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:56.444 12:47:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:56.444 12:47:38 -- common/autotest_common.sh@861 -- # break 00:27:56.444 12:47:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:56.444 12:47:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:56.444 12:47:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:56.444 1+0 records in 00:27:56.444 1+0 records out 00:27:56.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357197 s, 11.5 MB/s 00:27:56.444 12:47:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:56.444 12:47:38 -- common/autotest_common.sh@874 -- # size=4096 00:27:56.444 12:47:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:56.444 12:47:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:56.444 12:47:38 -- common/autotest_common.sh@877 -- # return 0 00:27:56.444 12:47:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:56.444 12:47:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.444 12:47:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:56.703 /dev/nbd1 00:27:56.703 12:47:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:56.703 12:47:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:56.703 12:47:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:27:56.703 12:47:39 -- common/autotest_common.sh@857 -- # local i 00:27:56.703 12:47:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:56.703 12:47:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:56.703 12:47:39 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:27:56.703 12:47:39 -- common/autotest_common.sh@861 -- # break 00:27:56.703 12:47:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:56.703 12:47:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:56.703 12:47:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:56.703 1+0 records in 00:27:56.703 1+0 records out 00:27:56.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413095 s, 9.9 MB/s 00:27:56.703 12:47:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:56.703 12:47:39 -- common/autotest_common.sh@874 -- # size=4096 00:27:56.703 12:47:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:56.703 12:47:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:56.703 12:47:39 -- common/autotest_common.sh@877 -- # return 0 00:27:56.703 12:47:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:56.703 12:47:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.703 12:47:39 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:56.963 12:47:39 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@51 -- # local i 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@41 -- # break 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.963 12:47:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@41 -- # break 00:27:57.223 12:47:39 -- bdev/nbd_common.sh@45 -- # return 0 00:27:57.223 12:47:39 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:27:57.223 12:47:39 -- bdev/bdev_raid.sh@709 -- # killprocess 131189 00:27:57.223 12:47:39 -- common/autotest_common.sh@926 -- # '[' -z 131189 ']' 00:27:57.223 12:47:39 -- common/autotest_common.sh@930 -- # kill -0 131189 00:27:57.223 12:47:39 -- common/autotest_common.sh@931 -- # 
uname 00:27:57.223 12:47:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:57.223 12:47:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131189 00:27:57.223 12:47:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:57.223 12:47:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:57.223 12:47:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131189' 00:27:57.223 killing process with pid 131189 00:27:57.223 12:47:39 -- common/autotest_common.sh@945 -- # kill 131189 00:27:57.223 Received shutdown signal, test time was about 60.000000 seconds 00:27:57.223 00:27:57.223 Latency(us) 00:27:57.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.223 =================================================================================================================== 00:27:57.223 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:57.223 12:47:39 -- common/autotest_common.sh@950 -- # wait 131189 00:27:57.223 [2024-10-01 12:47:39.731108] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:57.792 [2024-10-01 12:47:40.261858] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:59.173 12:47:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:59.173 00:27:59.173 real 0m24.582s 00:27:59.173 user 0m33.654s 00:27:59.173 sys 0m3.312s 00:27:59.173 12:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.173 12:47:41 -- common/autotest_common.sh@10 -- # set +x 00:27:59.173 ************************************ 00:27:59.173 END TEST raid5f_rebuild_test 00:27:59.173 ************************************ 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:27:59.433 12:47:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:59.433 12:47:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.433 12:47:41 -- common/autotest_common.sh@10 -- # set +x 00:27:59.433 ************************************ 00:27:59.433 START TEST raid5f_rebuild_test_sb 00:27:59.433 ************************************ 00:27:59.433 12:47:41 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 
00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=131803 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:59.433 12:47:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131803 /var/tmp/spdk-raid.sock 00:27:59.433 12:47:41 -- common/autotest_common.sh@819 -- # '[' -z 131803 ']' 00:27:59.433 12:47:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:59.433 12:47:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:59.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:59.433 12:47:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:59.433 12:47:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:59.433 12:47:41 -- common/autotest_common.sh@10 -- # set +x 00:27:59.433 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:59.433 Zero copy mechanism will not be used. 00:27:59.433 [2024-10-01 12:47:41.846527] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:59.433 [2024-10-01 12:47:41.846684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131803 ] 00:27:59.692 [2024-10-01 12:47:42.013842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.951 [2024-10-01 12:47:42.239930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.211 [2024-10-01 12:47:42.489305] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.211 12:47:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:00.211 12:47:42 -- common/autotest_common.sh@852 -- # return 0 00:28:00.211 12:47:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:00.211 12:47:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:00.211 12:47:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:00.469 BaseBdev1_malloc 00:28:00.469 12:47:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:00.728 [2024-10-01 12:47:43.048895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:00.728 [2024-10-01 12:47:43.048994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.728 [2024-10-01 12:47:43.049042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:28:00.728 [2024-10-01 12:47:43.049094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.728 [2024-10-01 12:47:43.051616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.728 [2024-10-01 12:47:43.051674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:00.728 BaseBdev1 00:28:00.728 12:47:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:00.728 12:47:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:00.728 12:47:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:00.988 BaseBdev2_malloc 00:28:00.988 12:47:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:00.988 [2024-10-01 12:47:43.442252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:00.988 [2024-10-01 12:47:43.442351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.989 [2024-10-01 12:47:43.442410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:00.989 [2024-10-01 12:47:43.442466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.989 [2024-10-01 12:47:43.444960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.989 [2024-10-01 12:47:43.445023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:00.989 BaseBdev2 00:28:00.989 12:47:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:00.989 12:47:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:00.989 12:47:43 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:01.247 BaseBdev3_malloc 00:28:01.247 12:47:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:01.505 [2024-10-01 12:47:43.862497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:01.505 [2024-10-01 12:47:43.862577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.505 [2024-10-01 12:47:43.862617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:01.505 [2024-10-01 12:47:43.862663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.505 [2024-10-01 12:47:43.865142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.505 [2024-10-01 12:47:43.865200] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:01.505 BaseBdev3 00:28:01.505 12:47:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:01.505 12:47:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:01.505 12:47:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:01.764 BaseBdev4_malloc 00:28:01.764 12:47:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:01.764 [2024-10-01 12:47:44.273015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:01.764 [2024-10-01 12:47:44.273102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.764 [2024-10-01 12:47:44.273152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:01.764 [2024-10-01 12:47:44.273196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.764 [2024-10-01 12:47:44.275667] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.764 [2024-10-01 12:47:44.275727] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:01.764 BaseBdev4 00:28:01.764 12:47:44 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:02.024 spare_malloc 00:28:02.024 12:47:44 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:02.283 spare_delay 00:28:02.283 12:47:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:02.540 [2024-10-01 12:47:44.836472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:02.540 [2024-10-01 12:47:44.836570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.540 [2024-10-01 12:47:44.836626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:02.540 [2024-10-01 12:47:44.836669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.540 [2024-10-01 12:47:44.839148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:28:02.540 [2024-10-01 12:47:44.839213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:02.540 spare 00:28:02.540 12:47:44 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:02.540 [2024-10-01 12:47:45.016313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.540 [2024-10-01 12:47:45.018341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:02.540 [2024-10-01 12:47:45.018409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:02.541 [2024-10-01 12:47:45.018454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:02.541 [2024-10-01 12:47:45.018635] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:28:02.541 [2024-10-01 12:47:45.018644] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:02.541 [2024-10-01 12:47:45.018748] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:02.541 [2024-10-01 12:47:45.027255] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:28:02.541 [2024-10-01 12:47:45.027278] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:28:02.541 [2024-10-01 12:47:45.027481] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.541 12:47:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.799 12:47:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:02.799 "name": "raid_bdev1", 00:28:02.799 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:02.799 "strip_size_kb": 64, 00:28:02.799 "state": "online", 00:28:02.799 "raid_level": "raid5f", 00:28:02.799 "superblock": true, 00:28:02.799 "num_base_bdevs": 4, 00:28:02.799 "num_base_bdevs_discovered": 4, 00:28:02.799 "num_base_bdevs_operational": 4, 00:28:02.799 "base_bdevs_list": [ 00:28:02.799 { 00:28:02.799 "name": "BaseBdev1", 00:28:02.799 "uuid": "50a4a38f-0e8e-597d-9465-a04836dd5cfd", 00:28:02.799 "is_configured": true, 00:28:02.799 "data_offset": 2048, 00:28:02.799 "data_size": 63488 00:28:02.799 }, 00:28:02.799 { 00:28:02.799 "name": "BaseBdev2", 00:28:02.799 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:02.799 "is_configured": true, 00:28:02.799 
"data_offset": 2048, 00:28:02.799 "data_size": 63488 00:28:02.799 }, 00:28:02.799 { 00:28:02.799 "name": "BaseBdev3", 00:28:02.799 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:02.799 "is_configured": true, 00:28:02.799 "data_offset": 2048, 00:28:02.799 "data_size": 63488 00:28:02.799 }, 00:28:02.799 { 00:28:02.799 "name": "BaseBdev4", 00:28:02.799 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:02.799 "is_configured": true, 00:28:02.799 "data_offset": 2048, 00:28:02.799 "data_size": 63488 00:28:02.799 } 00:28:02.799 ] 00:28:02.799 }' 00:28:02.799 12:47:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:02.799 12:47:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.437 12:47:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:03.437 12:47:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:28:03.437 [2024-10-01 12:47:45.919673] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:03.437 12:47:45 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:28:03.437 12:47:45 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.437 12:47:45 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:03.696 12:47:46 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:28:03.696 12:47:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:28:03.696 12:47:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:28:03.696 12:47:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@12 -- # local i 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:03.696 12:47:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:03.954 [2024-10-01 12:47:46.291216] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:03.954 /dev/nbd0 00:28:03.954 12:47:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:03.954 12:47:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:03.954 12:47:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:03.954 12:47:46 -- common/autotest_common.sh@857 -- # local i 00:28:03.954 12:47:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:03.954 12:47:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:03.954 12:47:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:03.954 12:47:46 -- common/autotest_common.sh@861 -- # break 00:28:03.954 12:47:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:03.954 12:47:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:03.954 12:47:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:03.954 1+0 records in 00:28:03.954 1+0 records out 00:28:03.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000373593 s, 11.0 MB/s 00:28:03.954 12:47:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:03.954 12:47:46 -- common/autotest_common.sh@874 -- # size=4096 00:28:03.954 12:47:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:03.954 12:47:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:03.954 12:47:46 -- common/autotest_common.sh@877 -- # return 0 00:28:03.954 12:47:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:03.954 12:47:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:03.954 12:47:46 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:28:03.954 12:47:46 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:28:03.954 12:47:46 -- bdev/bdev_raid.sh@582 -- # echo 192 00:28:03.954 12:47:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:28:04.519 496+0 records in 00:28:04.519 496+0 records out 00:28:04.519 97517568 bytes (98 MB, 93 MiB) copied, 0.49448 s, 197 MB/s 00:28:04.519 12:47:46 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:04.519 12:47:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:04.519 12:47:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:04.519 12:47:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:04.519 12:47:46 -- bdev/nbd_common.sh@51 -- # local i 00:28:04.519 12:47:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:04.519 12:47:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:04.777 [2024-10-01 12:47:47.065585] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@41 -- # break 00:28:04.777 12:47:47 -- bdev/nbd_common.sh@45 -- # return 0 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:04.777 [2024-10-01 12:47:47.227387] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:04.777 12:47:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:04.778 12:47:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:04.778 12:47:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:28:04.778 12:47:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.036 12:47:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:05.036 "name": "raid_bdev1", 00:28:05.036 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:05.036 "strip_size_kb": 64, 00:28:05.036 "state": "online", 00:28:05.036 "raid_level": "raid5f", 00:28:05.036 "superblock": true, 00:28:05.036 "num_base_bdevs": 4, 00:28:05.036 "num_base_bdevs_discovered": 3, 00:28:05.036 "num_base_bdevs_operational": 3, 00:28:05.036 "base_bdevs_list": [ 00:28:05.036 { 00:28:05.036 "name": null, 00:28:05.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.036 "is_configured": false, 00:28:05.036 "data_offset": 2048, 00:28:05.036 "data_size": 63488 00:28:05.036 }, 00:28:05.036 { 00:28:05.036 "name": "BaseBdev2", 00:28:05.036 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:05.036 "is_configured": true, 00:28:05.036 "data_offset": 2048, 00:28:05.036 "data_size": 63488 00:28:05.036 }, 00:28:05.036 { 00:28:05.036 "name": "BaseBdev3", 00:28:05.036 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:05.036 "is_configured": true, 00:28:05.036 "data_offset": 2048, 00:28:05.036 "data_size": 63488 00:28:05.036 }, 00:28:05.036 { 00:28:05.036 "name": "BaseBdev4", 00:28:05.036 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:05.036 "is_configured": true, 00:28:05.036 "data_offset": 2048, 00:28:05.036 "data_size": 63488 00:28:05.036 } 00:28:05.036 ] 00:28:05.036 }' 00:28:05.036 12:47:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:05.036 12:47:47 -- common/autotest_common.sh@10 -- # set +x 00:28:05.604 12:47:47 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:05.862 [2024-10-01 12:47:48.162682] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:05.862 [2024-10-01 12:47:48.162728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:05.862 [2024-10-01 12:47:48.180475] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:28:05.862 [2024-10-01 12:47:48.191431] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:05.862 12:47:48 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:28:06.840 12:47:49 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:06.840 12:47:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:06.840 12:47:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:06.840 12:47:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:06.840 12:47:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:06.840 12:47:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.840 12:47:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.098 12:47:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:07.098 "name": "raid_bdev1", 00:28:07.098 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:07.098 "strip_size_kb": 64, 00:28:07.098 "state": "online", 00:28:07.098 "raid_level": "raid5f", 00:28:07.098 "superblock": true, 00:28:07.098 "num_base_bdevs": 4, 00:28:07.098 "num_base_bdevs_discovered": 4, 00:28:07.098 "num_base_bdevs_operational": 4, 00:28:07.098 "process": { 00:28:07.098 "type": "rebuild", 00:28:07.098 "target": "spare", 00:28:07.098 "progress": { 
00:28:07.098 "blocks": 21120, 00:28:07.098 "percent": 11 00:28:07.098 } 00:28:07.098 }, 00:28:07.098 "base_bdevs_list": [ 00:28:07.098 { 00:28:07.099 "name": "spare", 00:28:07.099 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:07.099 "is_configured": true, 00:28:07.099 "data_offset": 2048, 00:28:07.099 "data_size": 63488 00:28:07.099 }, 00:28:07.099 { 00:28:07.099 "name": "BaseBdev2", 00:28:07.099 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:07.099 "is_configured": true, 00:28:07.099 "data_offset": 2048, 00:28:07.099 "data_size": 63488 00:28:07.099 }, 00:28:07.099 { 00:28:07.099 "name": "BaseBdev3", 00:28:07.099 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:07.099 "is_configured": true, 00:28:07.099 "data_offset": 2048, 00:28:07.099 "data_size": 63488 00:28:07.099 }, 00:28:07.099 { 00:28:07.099 "name": "BaseBdev4", 00:28:07.099 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:07.099 "is_configured": true, 00:28:07.099 "data_offset": 2048, 00:28:07.099 "data_size": 63488 00:28:07.099 } 00:28:07.099 ] 00:28:07.099 }' 00:28:07.099 12:47:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:07.099 12:47:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:07.099 12:47:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:07.099 12:47:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:07.099 12:47:49 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:07.357 [2024-10-01 12:47:49.667490] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:07.357 [2024-10-01 12:47:49.699518] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:07.357 [2024-10-01 12:47:49.699616] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.357 12:47:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.615 12:47:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:07.615 "name": "raid_bdev1", 00:28:07.615 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:07.615 "strip_size_kb": 64, 00:28:07.615 "state": "online", 00:28:07.615 "raid_level": "raid5f", 00:28:07.615 "superblock": true, 00:28:07.615 "num_base_bdevs": 4, 00:28:07.615 "num_base_bdevs_discovered": 3, 00:28:07.615 "num_base_bdevs_operational": 3, 00:28:07.615 "base_bdevs_list": [ 00:28:07.615 { 00:28:07.615 "name": null, 00:28:07.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.615 "is_configured": 
false, 00:28:07.615 "data_offset": 2048, 00:28:07.615 "data_size": 63488 00:28:07.615 }, 00:28:07.615 { 00:28:07.615 "name": "BaseBdev2", 00:28:07.615 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:07.615 "is_configured": true, 00:28:07.615 "data_offset": 2048, 00:28:07.615 "data_size": 63488 00:28:07.615 }, 00:28:07.615 { 00:28:07.615 "name": "BaseBdev3", 00:28:07.615 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:07.615 "is_configured": true, 00:28:07.615 "data_offset": 2048, 00:28:07.615 "data_size": 63488 00:28:07.615 }, 00:28:07.615 { 00:28:07.615 "name": "BaseBdev4", 00:28:07.615 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:07.615 "is_configured": true, 00:28:07.615 "data_offset": 2048, 00:28:07.615 "data_size": 63488 00:28:07.615 } 00:28:07.615 ] 00:28:07.615 }' 00:28:07.615 12:47:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:07.615 12:47:49 -- common/autotest_common.sh@10 -- # set +x 00:28:08.183 12:47:50 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:08.183 12:47:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:08.183 12:47:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:08.183 12:47:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:08.183 12:47:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:08.183 12:47:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.183 12:47:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.442 12:47:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:08.442 "name": "raid_bdev1", 00:28:08.442 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:08.442 "strip_size_kb": 64, 00:28:08.442 "state": "online", 00:28:08.442 "raid_level": "raid5f", 00:28:08.442 "superblock": true, 00:28:08.442 "num_base_bdevs": 4, 00:28:08.442 "num_base_bdevs_discovered": 3, 00:28:08.442 "num_base_bdevs_operational": 3, 00:28:08.442 "base_bdevs_list": [ 00:28:08.442 { 00:28:08.442 "name": null, 00:28:08.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.442 "is_configured": false, 00:28:08.442 "data_offset": 2048, 00:28:08.442 "data_size": 63488 00:28:08.442 }, 00:28:08.442 { 00:28:08.442 "name": "BaseBdev2", 00:28:08.442 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:08.442 "is_configured": true, 00:28:08.442 "data_offset": 2048, 00:28:08.442 "data_size": 63488 00:28:08.442 }, 00:28:08.442 { 00:28:08.442 "name": "BaseBdev3", 00:28:08.442 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:08.442 "is_configured": true, 00:28:08.442 "data_offset": 2048, 00:28:08.442 "data_size": 63488 00:28:08.442 }, 00:28:08.442 { 00:28:08.442 "name": "BaseBdev4", 00:28:08.442 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:08.442 "is_configured": true, 00:28:08.442 "data_offset": 2048, 00:28:08.442 "data_size": 63488 00:28:08.442 } 00:28:08.442 ] 00:28:08.442 }' 00:28:08.442 12:47:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:08.442 12:47:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:08.442 12:47:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:08.442 12:47:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:08.442 12:47:50 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:08.701 [2024-10-01 12:47:50.989427] bdev_raid.c:3095:raid_bdev_attach_base_bdev: 
*DEBUG*: attach_base_device: spare 00:28:08.701 [2024-10-01 12:47:50.989505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:08.701 [2024-10-01 12:47:51.007007] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:28:08.701 [2024-10-01 12:47:51.017667] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:08.701 12:47:51 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:28:09.637 12:47:52 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:09.637 12:47:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:09.637 12:47:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:09.637 12:47:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:09.637 12:47:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:09.637 12:47:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.637 12:47:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:09.896 "name": "raid_bdev1", 00:28:09.896 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:09.896 "strip_size_kb": 64, 00:28:09.896 "state": "online", 00:28:09.896 "raid_level": "raid5f", 00:28:09.896 "superblock": true, 00:28:09.896 "num_base_bdevs": 4, 00:28:09.896 "num_base_bdevs_discovered": 4, 00:28:09.896 "num_base_bdevs_operational": 4, 00:28:09.896 "process": { 00:28:09.896 "type": "rebuild", 00:28:09.896 "target": "spare", 00:28:09.896 "progress": { 00:28:09.896 "blocks": 21120, 00:28:09.896 "percent": 11 00:28:09.896 } 00:28:09.896 }, 00:28:09.896 "base_bdevs_list": [ 00:28:09.896 { 00:28:09.896 "name": "spare", 00:28:09.896 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:09.896 "is_configured": true, 00:28:09.896 "data_offset": 2048, 00:28:09.896 "data_size": 63488 00:28:09.896 }, 00:28:09.896 { 00:28:09.896 "name": "BaseBdev2", 00:28:09.896 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:09.896 "is_configured": true, 00:28:09.896 "data_offset": 2048, 00:28:09.896 "data_size": 63488 00:28:09.896 }, 00:28:09.896 { 00:28:09.896 "name": "BaseBdev3", 00:28:09.896 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:09.896 "is_configured": true, 00:28:09.896 "data_offset": 2048, 00:28:09.896 "data_size": 63488 00:28:09.896 }, 00:28:09.896 { 00:28:09.896 "name": "BaseBdev4", 00:28:09.896 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:09.896 "is_configured": true, 00:28:09.896 "data_offset": 2048, 00:28:09.896 "data_size": 63488 00:28:09.896 } 00:28:09.896 ] 00:28:09.896 }' 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:28:09.896 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@657 -- # local timeout=659 
00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.896 12:47:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.154 12:47:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:10.154 "name": "raid_bdev1", 00:28:10.154 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:10.154 "strip_size_kb": 64, 00:28:10.154 "state": "online", 00:28:10.154 "raid_level": "raid5f", 00:28:10.154 "superblock": true, 00:28:10.154 "num_base_bdevs": 4, 00:28:10.154 "num_base_bdevs_discovered": 4, 00:28:10.154 "num_base_bdevs_operational": 4, 00:28:10.154 "process": { 00:28:10.154 "type": "rebuild", 00:28:10.154 "target": "spare", 00:28:10.155 "progress": { 00:28:10.155 "blocks": 26880, 00:28:10.155 "percent": 14 00:28:10.155 } 00:28:10.155 }, 00:28:10.155 "base_bdevs_list": [ 00:28:10.155 { 00:28:10.155 "name": "spare", 00:28:10.155 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:10.155 "is_configured": true, 00:28:10.155 "data_offset": 2048, 00:28:10.155 "data_size": 63488 00:28:10.155 }, 00:28:10.155 { 00:28:10.155 "name": "BaseBdev2", 00:28:10.155 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:10.155 "is_configured": true, 00:28:10.155 "data_offset": 2048, 00:28:10.155 "data_size": 63488 00:28:10.155 }, 00:28:10.155 { 00:28:10.155 "name": "BaseBdev3", 00:28:10.155 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:10.155 "is_configured": true, 00:28:10.155 "data_offset": 2048, 00:28:10.155 "data_size": 63488 00:28:10.155 }, 00:28:10.155 { 00:28:10.155 "name": "BaseBdev4", 00:28:10.155 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:10.155 "is_configured": true, 00:28:10.155 "data_offset": 2048, 00:28:10.155 "data_size": 63488 00:28:10.155 } 00:28:10.155 ] 00:28:10.155 }' 00:28:10.155 12:47:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:10.155 12:47:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:10.155 12:47:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:10.155 12:47:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:10.155 12:47:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.091 12:47:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.350 12:47:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:11.350 "name": 
"raid_bdev1", 00:28:11.350 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:11.350 "strip_size_kb": 64, 00:28:11.350 "state": "online", 00:28:11.350 "raid_level": "raid5f", 00:28:11.350 "superblock": true, 00:28:11.350 "num_base_bdevs": 4, 00:28:11.350 "num_base_bdevs_discovered": 4, 00:28:11.350 "num_base_bdevs_operational": 4, 00:28:11.350 "process": { 00:28:11.350 "type": "rebuild", 00:28:11.350 "target": "spare", 00:28:11.350 "progress": { 00:28:11.350 "blocks": 51840, 00:28:11.350 "percent": 27 00:28:11.350 } 00:28:11.350 }, 00:28:11.350 "base_bdevs_list": [ 00:28:11.350 { 00:28:11.350 "name": "spare", 00:28:11.350 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:11.350 "is_configured": true, 00:28:11.350 "data_offset": 2048, 00:28:11.350 "data_size": 63488 00:28:11.350 }, 00:28:11.350 { 00:28:11.350 "name": "BaseBdev2", 00:28:11.350 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:11.350 "is_configured": true, 00:28:11.350 "data_offset": 2048, 00:28:11.350 "data_size": 63488 00:28:11.350 }, 00:28:11.350 { 00:28:11.350 "name": "BaseBdev3", 00:28:11.350 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:11.350 "is_configured": true, 00:28:11.350 "data_offset": 2048, 00:28:11.350 "data_size": 63488 00:28:11.350 }, 00:28:11.350 { 00:28:11.350 "name": "BaseBdev4", 00:28:11.350 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:11.350 "is_configured": true, 00:28:11.350 "data_offset": 2048, 00:28:11.350 "data_size": 63488 00:28:11.350 } 00:28:11.350 ] 00:28:11.350 }' 00:28:11.350 12:47:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:11.350 12:47:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:11.350 12:47:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:11.609 12:47:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:11.609 12:47:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:12.543 12:47:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:12.544 12:47:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:12.544 12:47:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:12.544 12:47:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:12.544 12:47:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:12.544 12:47:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:12.544 12:47:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.544 12:47:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.801 12:47:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:12.801 "name": "raid_bdev1", 00:28:12.801 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:12.801 "strip_size_kb": 64, 00:28:12.801 "state": "online", 00:28:12.801 "raid_level": "raid5f", 00:28:12.801 "superblock": true, 00:28:12.801 "num_base_bdevs": 4, 00:28:12.801 "num_base_bdevs_discovered": 4, 00:28:12.801 "num_base_bdevs_operational": 4, 00:28:12.801 "process": { 00:28:12.801 "type": "rebuild", 00:28:12.801 "target": "spare", 00:28:12.801 "progress": { 00:28:12.801 "blocks": 76800, 00:28:12.801 "percent": 40 00:28:12.801 } 00:28:12.801 }, 00:28:12.801 "base_bdevs_list": [ 00:28:12.801 { 00:28:12.801 "name": "spare", 00:28:12.801 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:12.801 "is_configured": true, 00:28:12.801 "data_offset": 2048, 00:28:12.801 "data_size": 63488 00:28:12.801 }, 00:28:12.801 { 00:28:12.801 
"name": "BaseBdev2", 00:28:12.801 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:12.801 "is_configured": true, 00:28:12.801 "data_offset": 2048, 00:28:12.801 "data_size": 63488 00:28:12.801 }, 00:28:12.801 { 00:28:12.801 "name": "BaseBdev3", 00:28:12.801 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:12.801 "is_configured": true, 00:28:12.801 "data_offset": 2048, 00:28:12.801 "data_size": 63488 00:28:12.801 }, 00:28:12.801 { 00:28:12.801 "name": "BaseBdev4", 00:28:12.801 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:12.801 "is_configured": true, 00:28:12.801 "data_offset": 2048, 00:28:12.801 "data_size": 63488 00:28:12.801 } 00:28:12.801 ] 00:28:12.801 }' 00:28:12.801 12:47:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:12.801 12:47:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:12.801 12:47:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:12.801 12:47:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:12.801 12:47:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.750 12:47:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.017 12:47:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:14.017 "name": "raid_bdev1", 00:28:14.017 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:14.017 "strip_size_kb": 64, 00:28:14.017 "state": "online", 00:28:14.017 "raid_level": "raid5f", 00:28:14.017 "superblock": true, 00:28:14.017 "num_base_bdevs": 4, 00:28:14.017 "num_base_bdevs_discovered": 4, 00:28:14.017 "num_base_bdevs_operational": 4, 00:28:14.017 "process": { 00:28:14.017 "type": "rebuild", 00:28:14.017 "target": "spare", 00:28:14.017 "progress": { 00:28:14.017 "blocks": 101760, 00:28:14.017 "percent": 53 00:28:14.017 } 00:28:14.017 }, 00:28:14.017 "base_bdevs_list": [ 00:28:14.017 { 00:28:14.017 "name": "spare", 00:28:14.017 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:14.017 "is_configured": true, 00:28:14.017 "data_offset": 2048, 00:28:14.017 "data_size": 63488 00:28:14.017 }, 00:28:14.017 { 00:28:14.017 "name": "BaseBdev2", 00:28:14.018 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:14.018 "is_configured": true, 00:28:14.018 "data_offset": 2048, 00:28:14.018 "data_size": 63488 00:28:14.018 }, 00:28:14.018 { 00:28:14.018 "name": "BaseBdev3", 00:28:14.018 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:14.018 "is_configured": true, 00:28:14.018 "data_offset": 2048, 00:28:14.018 "data_size": 63488 00:28:14.018 }, 00:28:14.018 { 00:28:14.018 "name": "BaseBdev4", 00:28:14.018 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:14.018 "is_configured": true, 00:28:14.018 "data_offset": 2048, 00:28:14.018 "data_size": 63488 00:28:14.018 } 00:28:14.018 ] 00:28:14.018 }' 00:28:14.018 12:47:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:14.018 12:47:56 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:28:14.018 12:47:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:14.018 12:47:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:14.018 12:47:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:15.395 "name": "raid_bdev1", 00:28:15.395 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:15.395 "strip_size_kb": 64, 00:28:15.395 "state": "online", 00:28:15.395 "raid_level": "raid5f", 00:28:15.395 "superblock": true, 00:28:15.395 "num_base_bdevs": 4, 00:28:15.395 "num_base_bdevs_discovered": 4, 00:28:15.395 "num_base_bdevs_operational": 4, 00:28:15.395 "process": { 00:28:15.395 "type": "rebuild", 00:28:15.395 "target": "spare", 00:28:15.395 "progress": { 00:28:15.395 "blocks": 126720, 00:28:15.395 "percent": 66 00:28:15.395 } 00:28:15.395 }, 00:28:15.395 "base_bdevs_list": [ 00:28:15.395 { 00:28:15.395 "name": "spare", 00:28:15.395 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:15.395 "is_configured": true, 00:28:15.395 "data_offset": 2048, 00:28:15.395 "data_size": 63488 00:28:15.395 }, 00:28:15.395 { 00:28:15.395 "name": "BaseBdev2", 00:28:15.395 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:15.395 "is_configured": true, 00:28:15.395 "data_offset": 2048, 00:28:15.395 "data_size": 63488 00:28:15.395 }, 00:28:15.395 { 00:28:15.395 "name": "BaseBdev3", 00:28:15.395 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:15.395 "is_configured": true, 00:28:15.395 "data_offset": 2048, 00:28:15.395 "data_size": 63488 00:28:15.395 }, 00:28:15.395 { 00:28:15.395 "name": "BaseBdev4", 00:28:15.395 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:15.395 "is_configured": true, 00:28:15.395 "data_offset": 2048, 00:28:15.395 "data_size": 63488 00:28:15.395 } 00:28:15.395 ] 00:28:15.395 }' 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.395 12:47:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.329 12:47:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.586 12:47:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:16.586 "name": "raid_bdev1", 00:28:16.586 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:16.586 "strip_size_kb": 64, 00:28:16.586 "state": "online", 00:28:16.586 "raid_level": "raid5f", 00:28:16.586 "superblock": true, 00:28:16.586 "num_base_bdevs": 4, 00:28:16.586 "num_base_bdevs_discovered": 4, 00:28:16.586 "num_base_bdevs_operational": 4, 00:28:16.586 "process": { 00:28:16.586 "type": "rebuild", 00:28:16.586 "target": "spare", 00:28:16.586 "progress": { 00:28:16.586 "blocks": 151680, 00:28:16.586 "percent": 79 00:28:16.586 } 00:28:16.586 }, 00:28:16.586 "base_bdevs_list": [ 00:28:16.586 { 00:28:16.586 "name": "spare", 00:28:16.586 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:16.586 "is_configured": true, 00:28:16.586 "data_offset": 2048, 00:28:16.586 "data_size": 63488 00:28:16.586 }, 00:28:16.586 { 00:28:16.586 "name": "BaseBdev2", 00:28:16.586 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:16.586 "is_configured": true, 00:28:16.586 "data_offset": 2048, 00:28:16.586 "data_size": 63488 00:28:16.586 }, 00:28:16.586 { 00:28:16.586 "name": "BaseBdev3", 00:28:16.586 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:16.586 "is_configured": true, 00:28:16.586 "data_offset": 2048, 00:28:16.586 "data_size": 63488 00:28:16.586 }, 00:28:16.586 { 00:28:16.586 "name": "BaseBdev4", 00:28:16.586 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:16.586 "is_configured": true, 00:28:16.586 "data_offset": 2048, 00:28:16.586 "data_size": 63488 00:28:16.586 } 00:28:16.586 ] 00:28:16.586 }' 00:28:16.586 12:47:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:16.586 12:47:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:16.586 12:47:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:16.844 12:47:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:16.844 12:47:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.781 12:48:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.040 12:48:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:18.040 "name": "raid_bdev1", 00:28:18.040 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:18.040 "strip_size_kb": 64, 00:28:18.040 "state": "online", 00:28:18.040 "raid_level": "raid5f", 00:28:18.040 "superblock": true, 00:28:18.040 "num_base_bdevs": 4, 00:28:18.040 "num_base_bdevs_discovered": 4, 00:28:18.040 "num_base_bdevs_operational": 4, 00:28:18.040 "process": { 00:28:18.040 "type": "rebuild", 00:28:18.040 "target": "spare", 00:28:18.040 "progress": { 00:28:18.040 "blocks": 176640, 00:28:18.040 "percent": 92 00:28:18.040 } 00:28:18.040 }, 
00:28:18.040 "base_bdevs_list": [ 00:28:18.040 { 00:28:18.040 "name": "spare", 00:28:18.040 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:18.040 "is_configured": true, 00:28:18.040 "data_offset": 2048, 00:28:18.040 "data_size": 63488 00:28:18.040 }, 00:28:18.040 { 00:28:18.040 "name": "BaseBdev2", 00:28:18.040 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:18.040 "is_configured": true, 00:28:18.040 "data_offset": 2048, 00:28:18.040 "data_size": 63488 00:28:18.040 }, 00:28:18.040 { 00:28:18.040 "name": "BaseBdev3", 00:28:18.040 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:18.040 "is_configured": true, 00:28:18.040 "data_offset": 2048, 00:28:18.040 "data_size": 63488 00:28:18.040 }, 00:28:18.040 { 00:28:18.040 "name": "BaseBdev4", 00:28:18.040 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:18.040 "is_configured": true, 00:28:18.040 "data_offset": 2048, 00:28:18.040 "data_size": 63488 00:28:18.040 } 00:28:18.040 ] 00:28:18.040 }' 00:28:18.040 12:48:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:18.040 12:48:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:18.040 12:48:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:18.040 12:48:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:18.040 12:48:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:18.606 [2024-10-01 12:48:01.064258] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:18.606 [2024-10-01 12:48:01.064335] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:18.607 [2024-10-01 12:48:01.064557] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:19.174 "name": "raid_bdev1", 00:28:19.174 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:19.174 "strip_size_kb": 64, 00:28:19.174 "state": "online", 00:28:19.174 "raid_level": "raid5f", 00:28:19.174 "superblock": true, 00:28:19.174 "num_base_bdevs": 4, 00:28:19.174 "num_base_bdevs_discovered": 4, 00:28:19.174 "num_base_bdevs_operational": 4, 00:28:19.174 "base_bdevs_list": [ 00:28:19.174 { 00:28:19.174 "name": "spare", 00:28:19.174 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:19.174 "is_configured": true, 00:28:19.174 "data_offset": 2048, 00:28:19.174 "data_size": 63488 00:28:19.174 }, 00:28:19.174 { 00:28:19.174 "name": "BaseBdev2", 00:28:19.174 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:19.174 "is_configured": true, 00:28:19.174 "data_offset": 2048, 00:28:19.174 "data_size": 63488 00:28:19.174 }, 00:28:19.174 { 00:28:19.174 "name": "BaseBdev3", 00:28:19.174 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:19.174 "is_configured": true, 00:28:19.174 
"data_offset": 2048, 00:28:19.174 "data_size": 63488 00:28:19.174 }, 00:28:19.174 { 00:28:19.174 "name": "BaseBdev4", 00:28:19.174 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:19.174 "is_configured": true, 00:28:19.174 "data_offset": 2048, 00:28:19.174 "data_size": 63488 00:28:19.174 } 00:28:19.174 ] 00:28:19.174 }' 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:19.174 12:48:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@660 -- # break 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:19.432 "name": "raid_bdev1", 00:28:19.432 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:19.432 "strip_size_kb": 64, 00:28:19.432 "state": "online", 00:28:19.432 "raid_level": "raid5f", 00:28:19.432 "superblock": true, 00:28:19.432 "num_base_bdevs": 4, 00:28:19.432 "num_base_bdevs_discovered": 4, 00:28:19.432 "num_base_bdevs_operational": 4, 00:28:19.432 "base_bdevs_list": [ 00:28:19.432 { 00:28:19.432 "name": "spare", 00:28:19.432 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:19.432 "is_configured": true, 00:28:19.432 "data_offset": 2048, 00:28:19.432 "data_size": 63488 00:28:19.432 }, 00:28:19.432 { 00:28:19.432 "name": "BaseBdev2", 00:28:19.432 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:19.432 "is_configured": true, 00:28:19.432 "data_offset": 2048, 00:28:19.432 "data_size": 63488 00:28:19.432 }, 00:28:19.432 { 00:28:19.432 "name": "BaseBdev3", 00:28:19.432 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:19.432 "is_configured": true, 00:28:19.432 "data_offset": 2048, 00:28:19.432 "data_size": 63488 00:28:19.432 }, 00:28:19.432 { 00:28:19.432 "name": "BaseBdev4", 00:28:19.432 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:19.432 "is_configured": true, 00:28:19.432 "data_offset": 2048, 00:28:19.432 "data_size": 63488 00:28:19.432 } 00:28:19.432 ] 00:28:19.432 }' 00:28:19.432 12:48:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:19.690 12:48:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:19.690 12:48:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:19.690 12:48:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.691 12:48:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.950 12:48:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:19.950 "name": "raid_bdev1", 00:28:19.950 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:19.950 "strip_size_kb": 64, 00:28:19.950 "state": "online", 00:28:19.950 "raid_level": "raid5f", 00:28:19.950 "superblock": true, 00:28:19.950 "num_base_bdevs": 4, 00:28:19.950 "num_base_bdevs_discovered": 4, 00:28:19.950 "num_base_bdevs_operational": 4, 00:28:19.950 "base_bdevs_list": [ 00:28:19.950 { 00:28:19.950 "name": "spare", 00:28:19.950 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:19.950 "is_configured": true, 00:28:19.950 "data_offset": 2048, 00:28:19.950 "data_size": 63488 00:28:19.950 }, 00:28:19.950 { 00:28:19.950 "name": "BaseBdev2", 00:28:19.950 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:19.950 "is_configured": true, 00:28:19.950 "data_offset": 2048, 00:28:19.950 "data_size": 63488 00:28:19.950 }, 00:28:19.950 { 00:28:19.950 "name": "BaseBdev3", 00:28:19.950 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:19.950 "is_configured": true, 00:28:19.950 "data_offset": 2048, 00:28:19.950 "data_size": 63488 00:28:19.950 }, 00:28:19.950 { 00:28:19.950 "name": "BaseBdev4", 00:28:19.950 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:19.950 "is_configured": true, 00:28:19.950 "data_offset": 2048, 00:28:19.950 "data_size": 63488 00:28:19.950 } 00:28:19.950 ] 00:28:19.950 }' 00:28:19.950 12:48:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:19.950 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:20.519 12:48:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:20.519 [2024-10-01 12:48:03.003256] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:20.519 [2024-10-01 12:48:03.003299] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:20.519 [2024-10-01 12:48:03.003462] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:20.519 [2024-10-01 12:48:03.003571] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:20.519 [2024-10-01 12:48:03.003581] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:28:20.519 12:48:03 -- bdev/bdev_raid.sh@671 -- # jq length 00:28:20.519 12:48:03 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.778 12:48:03 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:28:20.778 12:48:03 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:28:20.778 12:48:03 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@12 -- # local i 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:20.778 12:48:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:21.037 /dev/nbd0 00:28:21.037 12:48:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:21.037 12:48:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:21.037 12:48:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:21.037 12:48:03 -- common/autotest_common.sh@857 -- # local i 00:28:21.037 12:48:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:21.037 12:48:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:21.037 12:48:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:21.037 12:48:03 -- common/autotest_common.sh@861 -- # break 00:28:21.037 12:48:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:21.037 12:48:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:21.037 12:48:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:21.037 1+0 records in 00:28:21.037 1+0 records out 00:28:21.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457106 s, 9.0 MB/s 00:28:21.037 12:48:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.037 12:48:03 -- common/autotest_common.sh@874 -- # size=4096 00:28:21.037 12:48:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.037 12:48:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:21.037 12:48:03 -- common/autotest_common.sh@877 -- # return 0 00:28:21.037 12:48:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:21.037 12:48:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:21.037 12:48:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:21.296 /dev/nbd1 00:28:21.296 12:48:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:21.297 12:48:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:21.297 12:48:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:21.297 12:48:03 -- common/autotest_common.sh@857 -- # local i 00:28:21.297 12:48:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:21.297 12:48:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:21.297 12:48:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:21.297 12:48:03 -- common/autotest_common.sh@861 -- # break 00:28:21.297 12:48:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:21.297 12:48:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:21.297 12:48:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:21.297 1+0 records in 00:28:21.297 1+0 records out 00:28:21.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065964 s, 6.2 MB/s 00:28:21.297 12:48:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.297 
12:48:03 -- common/autotest_common.sh@874 -- # size=4096 00:28:21.297 12:48:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.297 12:48:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:21.297 12:48:03 -- common/autotest_common.sh@877 -- # return 0 00:28:21.297 12:48:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:21.297 12:48:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:21.297 12:48:03 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:21.556 12:48:03 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:21.557 12:48:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:21.557 12:48:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:21.557 12:48:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:21.557 12:48:03 -- bdev/nbd_common.sh@51 -- # local i 00:28:21.557 12:48:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:21.557 12:48:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@41 -- # break 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@45 -- # return 0 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:21.816 12:48:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@41 -- # break 00:28:22.075 12:48:04 -- bdev/nbd_common.sh@45 -- # return 0 00:28:22.075 12:48:04 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:28:22.075 12:48:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:28:22.075 12:48:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:28:22.075 12:48:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:22.075 12:48:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:22.334 [2024-10-01 12:48:04.755589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:22.334 [2024-10-01 12:48:04.755667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.334 [2024-10-01 12:48:04.755758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:22.334 [2024-10-01 12:48:04.755783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
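The entries above trace a fixed pattern: verify_raid_bdev_process re-reads bdev_raid_get_bdevs once per second until the process object drops out of the JSON (rebuild finished), after which the rebuilt spare and BaseBdev1 are exported over NBD and compared byte-for-byte past the region that cmp -i 1048576 skips (1 MiB, matching the 2048-block data_offset at 512 B blocks seen in the JSON above, where the raid superblock lives). A condensed sketch of that flow, assuming the same rpc.py path and /var/tmp/spdk-raid.sock socket used throughout this log; the rpc variable and the 60 s timeout here are illustrative, not the suite's own:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Poll once per second until the rebuild process field disappears,
# mirroring the (( SECONDS < timeout )) loop traced above.
timeout=60
SECONDS=0
while (( SECONDS < timeout )); do
    ptype=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [[ $ptype == none ]] && break
    sleep 1
done

# Expose the original bdev and the rebuilt spare as block devices and
# compare them, skipping the first 1 MiB of superblock metadata.
$rpc nbd_start_disk BaseBdev1 /dev/nbd0
$rpc nbd_start_disk spare /dev/nbd1
cmp -i 1048576 /dev/nbd0 /dev/nbd1   # non-zero exit on any data mismatch
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1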
00:28:22.334 [2024-10-01 12:48:04.758461] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.334 [2024-10-01 12:48:04.758531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:22.334 [2024-10-01 12:48:04.758637] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:22.334 [2024-10-01 12:48:04.758713] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:22.334 BaseBdev1 00:28:22.334 12:48:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:28:22.334 12:48:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:28:22.334 12:48:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:28:22.593 12:48:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:22.593 [2024-10-01 12:48:05.121397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:22.593 [2024-10-01 12:48:05.121479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.593 [2024-10-01 12:48:05.121523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:22.593 [2024-10-01 12:48:05.121546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.593 [2024-10-01 12:48:05.122021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.593 [2024-10-01 12:48:05.122081] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:22.593 [2024-10-01 12:48:05.122196] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:28:22.593 [2024-10-01 12:48:05.122208] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:28:22.593 [2024-10-01 12:48:05.122215] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:22.593 [2024-10-01 12:48:05.122250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:28:22.593 [2024-10-01 12:48:05.122341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:22.593 BaseBdev2 00:28:22.852 12:48:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:28:22.852 12:48:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:28:22.852 12:48:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:28:22.852 12:48:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:23.112 [2024-10-01 12:48:05.452912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:23.112 [2024-10-01 12:48:05.452992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.112 [2024-10-01 12:48:05.453026] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:28:23.112 [2024-10-01 12:48:05.453052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.112 [2024-10-01 12:48:05.453544] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.112 [2024-10-01 12:48:05.453601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:23.112 [2024-10-01 12:48:05.453716] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:28:23.112 [2024-10-01 12:48:05.453741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:23.112 BaseBdev3 00:28:23.112 12:48:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:28:23.112 12:48:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:28:23.112 12:48:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:28:23.371 12:48:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:23.371 [2024-10-01 12:48:05.852350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:23.371 [2024-10-01 12:48:05.852424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.371 [2024-10-01 12:48:05.852458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:23.371 [2024-10-01 12:48:05.852485] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.371 [2024-10-01 12:48:05.852943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.371 [2024-10-01 12:48:05.852997] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:23.371 [2024-10-01 12:48:05.853110] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:28:23.371 [2024-10-01 12:48:05.853133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:23.371 BaseBdev4 00:28:23.371 12:48:05 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:23.648 12:48:06 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:23.918 [2024-10-01 12:48:06.223838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:23.918 [2024-10-01 12:48:06.223920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.918 [2024-10-01 12:48:06.223968] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:28:23.918 [2024-10-01 12:48:06.223998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.918 [2024-10-01 12:48:06.224514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.918 [2024-10-01 12:48:06.224568] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:23.918 [2024-10-01 12:48:06.224674] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:28:23.918 [2024-10-01 12:48:06.224702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:23.918 spare 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.918 [2024-10-01 12:48:06.324640] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:28:23.918 [2024-10-01 12:48:06.324656] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:23.918 [2024-10-01 12:48:06.324790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510 00:28:23.918 [2024-10-01 12:48:06.332529] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:28:23.918 [2024-10-01 12:48:06.332550] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:28:23.918 [2024-10-01 12:48:06.332683] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:23.918 "name": "raid_bdev1", 00:28:23.918 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:23.918 "strip_size_kb": 64, 00:28:23.918 "state": "online", 00:28:23.918 "raid_level": "raid5f", 00:28:23.918 "superblock": true, 00:28:23.918 "num_base_bdevs": 4, 00:28:23.918 "num_base_bdevs_discovered": 4, 00:28:23.918 "num_base_bdevs_operational": 4, 00:28:23.918 "base_bdevs_list": [ 00:28:23.918 { 00:28:23.918 "name": "spare", 00:28:23.918 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:23.918 "is_configured": true, 00:28:23.918 "data_offset": 2048, 00:28:23.918 "data_size": 63488 00:28:23.918 }, 00:28:23.918 { 00:28:23.918 "name": "BaseBdev2", 00:28:23.918 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:23.918 "is_configured": true, 00:28:23.918 "data_offset": 2048, 00:28:23.918 "data_size": 63488 00:28:23.918 }, 00:28:23.918 { 00:28:23.918 "name": "BaseBdev3", 00:28:23.918 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:23.918 "is_configured": true, 00:28:23.918 "data_offset": 2048, 00:28:23.918 "data_size": 63488 00:28:23.918 }, 00:28:23.918 { 00:28:23.918 "name": "BaseBdev4", 00:28:23.918 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:23.918 "is_configured": true, 00:28:23.918 "data_offset": 2048, 00:28:23.918 "data_size": 63488 00:28:23.918 } 00:28:23.918 ] 00:28:23.918 }' 00:28:23.918 12:48:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:23.918 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:28:24.489 12:48:06 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:24.489 12:48:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:24.489 12:48:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:24.489 12:48:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:24.489 12:48:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:24.489 12:48:07 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.489 12:48:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.748 12:48:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:24.748 "name": "raid_bdev1", 00:28:24.748 "uuid": "9f8727f5-2fe9-4293-8c9e-9fe33c34adb7", 00:28:24.748 "strip_size_kb": 64, 00:28:24.748 "state": "online", 00:28:24.748 "raid_level": "raid5f", 00:28:24.748 "superblock": true, 00:28:24.748 "num_base_bdevs": 4, 00:28:24.748 "num_base_bdevs_discovered": 4, 00:28:24.748 "num_base_bdevs_operational": 4, 00:28:24.748 "base_bdevs_list": [ 00:28:24.748 { 00:28:24.748 "name": "spare", 00:28:24.748 "uuid": "0dea7ae1-b202-53af-b578-7ffbb3b39742", 00:28:24.748 "is_configured": true, 00:28:24.748 "data_offset": 2048, 00:28:24.748 "data_size": 63488 00:28:24.748 }, 00:28:24.748 { 00:28:24.748 "name": "BaseBdev2", 00:28:24.748 "uuid": "36f58789-39e5-5371-9505-3db0db8f16c8", 00:28:24.748 "is_configured": true, 00:28:24.748 "data_offset": 2048, 00:28:24.748 "data_size": 63488 00:28:24.748 }, 00:28:24.748 { 00:28:24.748 "name": "BaseBdev3", 00:28:24.748 "uuid": "0ce2a4aa-e34b-57c7-b504-d690fea2f2b1", 00:28:24.748 "is_configured": true, 00:28:24.748 "data_offset": 2048, 00:28:24.748 "data_size": 63488 00:28:24.748 }, 00:28:24.748 { 00:28:24.748 "name": "BaseBdev4", 00:28:24.748 "uuid": "1a00d4c5-01bd-5e2e-9313-d048f215784e", 00:28:24.748 "is_configured": true, 00:28:24.748 "data_offset": 2048, 00:28:24.748 "data_size": 63488 00:28:24.748 } 00:28:24.748 ] 00:28:24.748 }' 00:28:24.748 12:48:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:24.748 12:48:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:24.748 12:48:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:24.748 12:48:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:24.748 12:48:07 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:24.748 12:48:07 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.008 12:48:07 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.008 12:48:07 -- bdev/bdev_raid.sh@709 -- # killprocess 131803 00:28:25.008 12:48:07 -- common/autotest_common.sh@926 -- # '[' -z 131803 ']' 00:28:25.008 12:48:07 -- common/autotest_common.sh@930 -- # kill -0 131803 00:28:25.008 12:48:07 -- common/autotest_common.sh@931 -- # uname 00:28:25.008 12:48:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:25.008 12:48:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131803 00:28:25.008 12:48:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:25.008 12:48:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:25.008 killing process with pid 131803 00:28:25.008 12:48:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131803' 00:28:25.008 12:48:07 -- common/autotest_common.sh@945 -- # kill 131803 00:28:25.008 Received shutdown signal, test time was about 60.000000 seconds 00:28:25.008 00:28:25.008 Latency(us) 00:28:25.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.008 =================================================================================================================== 00:28:25.008 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:25.008 [2024-10-01 12:48:07.501711] bdev_raid.c:1234:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:28:25.008 [2024-10-01 12:48:07.501789] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:25.008 [2024-10-01 12:48:07.501870] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:25.008 [2024-10-01 12:48:07.501880] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:28:25.008 12:48:07 -- common/autotest_common.sh@950 -- # wait 131803 00:28:25.578 [2024-10-01 12:48:08.015419] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:26.959 12:48:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:28:26.959 00:28:26.959 real 0m27.659s 00:28:26.959 user 0m39.881s 00:28:26.959 sys 0m3.871s 00:28:26.959 12:48:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.959 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:28:26.959 ************************************ 00:28:26.959 END TEST raid5f_rebuild_test_sb 00:28:26.959 ************************************ 00:28:27.220 12:48:09 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:28:27.220 ************************************ 00:28:27.220 END TEST bdev_raid 00:28:27.220 ************************************ 00:28:27.220 00:28:27.220 real 10m46.358s 00:28:27.220 user 16m47.389s 00:28:27.220 sys 1m47.624s 00:28:27.220 12:48:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.220 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:28:27.220 12:48:09 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:28:27.220 12:48:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:27.220 12:48:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.220 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:28:27.220 ************************************ 00:28:27.220 START TEST bdevperf_config 00:28:27.220 ************************************ 00:28:27.220 12:48:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:28:27.220 * Looking for test storage... 
00:28:27.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:28:27.220 12:48:09 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:28:27.220 12:48:09 -- bdevperf/common.sh@8 -- # local job_section=global 00:28:27.220 12:48:09 -- bdevperf/common.sh@9 -- # local rw=read 00:28:27.220 12:48:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:27.220 12:48:09 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:27.220 12:48:09 -- bdevperf/common.sh@13 -- # cat 00:28:27.220 12:48:09 -- bdevperf/common.sh@18 -- # job='[global]' 00:28:27.220 00:28:27.220 12:48:09 -- bdevperf/common.sh@19 -- # echo 00:28:27.220 12:48:09 -- bdevperf/common.sh@20 -- # cat 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@18 -- # create_job job0 00:28:27.220 12:48:09 -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:27.220 12:48:09 -- bdevperf/common.sh@9 -- # local rw= 00:28:27.220 12:48:09 -- bdevperf/common.sh@10 -- # local filename= 00:28:27.220 12:48:09 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:27.220 12:48:09 -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:27.220 00:28:27.220 12:48:09 -- bdevperf/common.sh@19 -- # echo 00:28:27.220 12:48:09 -- bdevperf/common.sh@20 -- # cat 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@19 -- # create_job job1 00:28:27.220 12:48:09 -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:27.220 12:48:09 -- bdevperf/common.sh@9 -- # local rw= 00:28:27.220 12:48:09 -- bdevperf/common.sh@10 -- # local filename= 00:28:27.220 12:48:09 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:27.220 12:48:09 -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:27.220 00:28:27.220 12:48:09 -- bdevperf/common.sh@19 -- # echo 00:28:27.220 12:48:09 -- bdevperf/common.sh@20 -- # cat 00:28:27.220 12:48:09 -- bdevperf/test_config.sh@20 -- # create_job job2 00:28:27.220 12:48:09 -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:27.220 12:48:09 -- bdevperf/common.sh@9 -- # local rw= 00:28:27.220 12:48:09 -- bdevperf/common.sh@10 -- # local filename= 00:28:27.220 12:48:09 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:27.220 12:48:09 -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:27.220 12:48:09 -- bdevperf/common.sh@19 -- # echo 00:28:27.220 00:28:27.481 12:48:09 -- bdevperf/common.sh@20 -- # cat 00:28:27.481 12:48:09 -- bdevperf/test_config.sh@21 -- # create_job job3 00:28:27.481 12:48:09 -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:27.481 12:48:09 -- bdevperf/common.sh@9 -- # local rw= 00:28:27.481 12:48:09 -- bdevperf/common.sh@10 -- # local filename= 00:28:27.481 12:48:09 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:27.481 12:48:09 -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:27.481 00:28:27.481 12:48:09 -- bdevperf/common.sh@19 -- # echo 00:28:27.481 12:48:09 -- bdevperf/common.sh@20 -- # cat 00:28:27.481 12:48:09 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:32.765 12:48:14 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-10-01 12:48:09.834864] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:32.765 [2024-10-01 12:48:09.835012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132549 ] 00:28:32.765 Using job config with 4 jobs 00:28:32.765 [2024-10-01 12:48:10.003104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.765 [2024-10-01 12:48:10.283706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.765 cpumask for '\''job0'\'' is too big 00:28:32.765 cpumask for '\''job1'\'' is too big 00:28:32.765 cpumask for '\''job2'\'' is too big 00:28:32.765 cpumask for '\''job3'\'' is too big 00:28:32.765 Running I/O for 2 seconds... 00:28:32.765 00:28:32.765 Latency(us) 00:28:32.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36786.36 35.92 0.00 0.00 6953.37 1302.82 10791.07 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36764.52 35.90 0.00 0.00 6946.81 1230.44 9527.72 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36743.05 35.88 0.00 0.00 6940.14 1263.34 8264.38 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.02 36814.53 35.95 0.00 0.00 6917.44 667.86 7474.79 00:28:32.765 =================================================================================================================== 00:28:32.765 Total : 147108.46 143.66 0.00 0.00 6939.42 667.86 10791.07' 00:28:32.765 12:48:14 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-10-01 12:48:09.834864] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:32.765 [2024-10-01 12:48:09.835012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132549 ] 00:28:32.765 Using job config with 4 jobs 00:28:32.765 [2024-10-01 12:48:10.003104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.765 [2024-10-01 12:48:10.283706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.765 cpumask for '\''job0'\'' is too big 00:28:32.765 cpumask for '\''job1'\'' is too big 00:28:32.765 cpumask for '\''job2'\'' is too big 00:28:32.765 cpumask for '\''job3'\'' is too big 00:28:32.765 Running I/O for 2 seconds... 
00:28:32.765 00:28:32.765 Latency(us) 00:28:32.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36786.36 35.92 0.00 0.00 6953.37 1302.82 10791.07 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36764.52 35.90 0.00 0.00 6946.81 1230.44 9527.72 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36743.05 35.88 0.00 0.00 6940.14 1263.34 8264.38 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.02 36814.53 35.95 0.00 0.00 6917.44 667.86 7474.79 00:28:32.765 =================================================================================================================== 00:28:32.765 Total : 147108.46 143.66 0.00 0.00 6939.42 667.86 10791.07' 00:28:32.765 12:48:14 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:32.765 12:48:14 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:32.765 12:48:14 -- bdevperf/common.sh@32 -- # echo '[2024-10-01 12:48:09.834864] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:32.765 [2024-10-01 12:48:09.835012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132549 ] 00:28:32.765 Using job config with 4 jobs 00:28:32.765 [2024-10-01 12:48:10.003104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.765 [2024-10-01 12:48:10.283706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.765 cpumask for '\''job0'\'' is too big 00:28:32.765 cpumask for '\''job1'\'' is too big 00:28:32.765 cpumask for '\''job2'\'' is too big 00:28:32.765 cpumask for '\''job3'\'' is too big 00:28:32.765 Running I/O for 2 seconds... 00:28:32.765 00:28:32.765 Latency(us) 00:28:32.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36786.36 35.92 0.00 0.00 6953.37 1302.82 10791.07 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36764.52 35.90 0.00 0.00 6946.81 1230.44 9527.72 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.01 36743.05 35.88 0.00 0.00 6940.14 1263.34 8264.38 00:28:32.765 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:32.765 Malloc0 : 2.02 36814.53 35.95 0.00 0.00 6917.44 667.86 7474.79 00:28:32.765 =================================================================================================================== 00:28:32.765 Total : 147108.46 143.66 0.00 0.00 6939.42 667.86 10791.07' 00:28:32.765 12:48:14 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:28:32.765 12:48:14 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:32.765 [2024-10-01 12:48:14.698210] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
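The get_num_jobs check traced a few entries back is plain pattern-matching on bdevperf's banner: the captured output is echoed through two greps, the first isolating the 'Using job config with N jobs' line and the second the number itself. Reconstructed from the xtrace above; the function body is a sketch of what bdevperf/common.sh is doing, not a verbatim copy:

# Extract the job count bdevperf reported in its startup banner.
get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

# Mirrors the [[ 4 == \4 ]] assertion at test_config.sh@23 above.
[[ $(get_num_jobs "$bdevperf_output") == 4 ]]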
00:28:32.765 [2024-10-01 12:48:14.698339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132616 ] 00:28:32.765 [2024-10-01 12:48:14.864736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.765 [2024-10-01 12:48:15.130836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.336 cpumask for 'job0' is too big 00:28:33.337 cpumask for 'job1' is too big 00:28:33.337 cpumask for 'job2' is too big 00:28:33.337 cpumask for 'job3' is too big 00:28:37.569 12:48:19 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:28:37.569 Running I/O for 2 seconds... 00:28:37.569 00:28:37.569 Latency(us) 00:28:37.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.569 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.569 Malloc0 : 2.01 36724.36 35.86 0.00 0.00 6965.60 1434.42 13423.04 00:28:37.569 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.569 Malloc0 : 2.01 36732.67 35.87 0.00 0.00 6951.91 1506.80 11896.49 00:28:37.569 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.569 Malloc0 : 2.02 36711.09 35.85 0.00 0.00 6943.64 1447.58 9843.56 00:28:37.569 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.569 Malloc0 : 2.02 36689.53 35.83 0.00 0.00 6936.02 1414.68 8159.10 00:28:37.569 =================================================================================================================== 00:28:37.569 Total : 146857.64 143.42 0.00 0.00 6949.28 1414.68 13423.04' 00:28:37.569 12:48:19 -- bdevperf/test_config.sh@27 -- # cleanup 00:28:37.569 12:48:19 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:37.569 12:48:19 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:28:37.569 12:48:19 -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:37.569 12:48:19 -- bdevperf/common.sh@9 -- # local rw=write 00:28:37.569 12:48:19 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:37.569 12:48:19 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:37.569 12:48:19 -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:37.569 00:28:37.569 12:48:19 -- bdevperf/common.sh@19 -- # echo 00:28:37.569 12:48:19 -- bdevperf/common.sh@20 -- # cat 00:28:37.569 12:48:19 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:28:37.569 12:48:19 -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:37.569 12:48:19 -- bdevperf/common.sh@9 -- # local rw=write 00:28:37.569 12:48:19 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:37.569 12:48:19 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:37.569 12:48:19 -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:37.569 12:48:19 -- bdevperf/common.sh@19 -- # echo 00:28:37.569 00:28:37.569 12:48:19 -- bdevperf/common.sh@20 -- # cat 00:28:37.569 12:48:19 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:28:37.569 12:48:19 -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:37.569 12:48:19 -- bdevperf/common.sh@9 -- # local rw=write 00:28:37.569 12:48:19 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:37.569 12:48:19 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:37.569 00:28:37.569 12:48:19 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:28:37.569 12:48:19 -- bdevperf/common.sh@19 -- # echo 00:28:37.569 12:48:19 -- bdevperf/common.sh@20 -- # cat 00:28:37.569 12:48:19 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:42.839 12:48:24 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-10-01 12:48:19.587339] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:42.839 [2024-10-01 12:48:19.587516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132674 ] 00:28:42.839 Using job config with 3 jobs 00:28:42.839 [2024-10-01 12:48:19.756900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.839 [2024-10-01 12:48:20.023668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.839 cpumask for '\''job0'\'' is too big 00:28:42.839 cpumask for '\''job1'\'' is too big 00:28:42.839 cpumask for '\''job2'\'' is too big 00:28:42.839 Running I/O for 2 seconds... 00:28:42.839 00:28:42.839 Latency(us) 00:28:42.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47835.04 46.71 0.00 0.00 5346.98 1302.82 8106.46 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47851.44 46.73 0.00 0.00 5336.70 1210.71 8053.82 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47821.63 46.70 0.00 0.00 5332.11 1237.02 8053.82 00:28:42.839 =================================================================================================================== 00:28:42.839 Total : 143508.11 140.14 0.00 0.00 5338.59 1210.71 8106.46' 00:28:42.839 12:48:24 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-10-01 12:48:19.587339] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:42.839 [2024-10-01 12:48:19.587516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132674 ] 00:28:42.839 Using job config with 3 jobs 00:28:42.839 [2024-10-01 12:48:19.756900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.839 [2024-10-01 12:48:20.023668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.839 cpumask for '\''job0'\'' is too big 00:28:42.839 cpumask for '\''job1'\'' is too big 00:28:42.839 cpumask for '\''job2'\'' is too big 00:28:42.839 Running I/O for 2 seconds... 
00:28:42.839 00:28:42.839 Latency(us) 00:28:42.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47835.04 46.71 0.00 0.00 5346.98 1302.82 8106.46 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47851.44 46.73 0.00 0.00 5336.70 1210.71 8053.82 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47821.63 46.70 0.00 0.00 5332.11 1237.02 8053.82 00:28:42.839 =================================================================================================================== 00:28:42.839 Total : 143508.11 140.14 0.00 0.00 5338.59 1210.71 8106.46' 00:28:42.839 12:48:24 -- bdevperf/common.sh@32 -- # echo '[2024-10-01 12:48:19.587339] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:42.839 [2024-10-01 12:48:19.587516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132674 ] 00:28:42.839 Using job config with 3 jobs 00:28:42.839 [2024-10-01 12:48:19.756900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.839 [2024-10-01 12:48:20.023668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.839 cpumask for '\''job0'\'' is too big 00:28:42.839 cpumask for '\''job1'\'' is too big 00:28:42.839 cpumask for '\''job2'\'' is too big 00:28:42.839 Running I/O for 2 seconds... 00:28:42.839 00:28:42.839 Latency(us) 00:28:42.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47835.04 46.71 0.00 0.00 5346.98 1302.82 8106.46 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47851.44 46.73 0.00 0.00 5336.70 1210.71 8053.82 00:28:42.839 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:42.839 Malloc0 : 2.01 47821.63 46.70 0.00 0.00 5332.11 1237.02 8053.82 00:28:42.839 =================================================================================================================== 00:28:42.839 Total : 143508.11 140.14 0.00 0.00 5338.59 1210.71 8106.46' 00:28:42.839 12:48:24 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:42.839 12:48:24 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:42.839 12:48:24 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:28:42.839 12:48:24 -- bdevperf/test_config.sh@35 -- # cleanup 00:28:42.839 12:48:24 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:42.839 12:48:24 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:28:42.839 12:48:24 -- bdevperf/common.sh@8 -- # local job_section=global 00:28:42.839 12:48:24 -- bdevperf/common.sh@9 -- # local rw=rw 00:28:42.839 12:48:24 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:28:42.840 12:48:24 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:42.840 12:48:24 -- bdevperf/common.sh@13 -- # cat 00:28:42.840 12:48:24 -- bdevperf/common.sh@18 -- # job='[global]' 00:28:42.840 00:28:42.840 12:48:24 -- bdevperf/common.sh@19 -- # echo 00:28:42.840 
12:48:24 -- bdevperf/common.sh@20 -- # cat 00:28:42.840 12:48:24 -- bdevperf/test_config.sh@38 -- # create_job job0 00:28:42.840 12:48:24 -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:42.840 12:48:24 -- bdevperf/common.sh@9 -- # local rw= 00:28:42.840 12:48:24 -- bdevperf/common.sh@10 -- # local filename= 00:28:42.840 12:48:24 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:42.840 12:48:24 -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:42.840 00:28:42.840 12:48:24 -- bdevperf/common.sh@19 -- # echo 00:28:42.840 12:48:24 -- bdevperf/common.sh@20 -- # cat 00:28:42.840 12:48:24 -- bdevperf/test_config.sh@39 -- # create_job job1 00:28:42.840 12:48:24 -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:42.840 12:48:24 -- bdevperf/common.sh@9 -- # local rw= 00:28:42.840 12:48:24 -- bdevperf/common.sh@10 -- # local filename= 00:28:42.840 12:48:24 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:42.840 12:48:24 -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:42.840 00:28:42.840 12:48:24 -- bdevperf/common.sh@19 -- # echo 00:28:42.840 12:48:24 -- bdevperf/common.sh@20 -- # cat 00:28:42.840 12:48:24 -- bdevperf/test_config.sh@40 -- # create_job job2 00:28:42.840 12:48:24 -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:42.840 12:48:24 -- bdevperf/common.sh@9 -- # local rw= 00:28:42.840 12:48:24 -- bdevperf/common.sh@10 -- # local filename= 00:28:42.840 12:48:24 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:42.840 12:48:24 -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:42.840 00:28:42.840 12:48:24 -- bdevperf/common.sh@19 -- # echo 00:28:42.840 12:48:24 -- bdevperf/common.sh@20 -- # cat 00:28:42.840 12:48:24 -- bdevperf/test_config.sh@41 -- # create_job job3 00:28:42.840 12:48:24 -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:42.840 12:48:24 -- bdevperf/common.sh@9 -- # local rw= 00:28:42.840 12:48:24 -- bdevperf/common.sh@10 -- # local filename= 00:28:42.840 12:48:24 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:42.840 12:48:24 -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:42.840 00:28:42.840 12:48:24 -- bdevperf/common.sh@19 -- # echo 00:28:42.840 12:48:24 -- bdevperf/common.sh@20 -- # cat 00:28:42.840 12:48:24 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:47.065 12:48:29 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-10-01 12:48:24.509052] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:47.065 [2024-10-01 12:48:24.509217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132744 ] 00:28:47.065 Using job config with 4 jobs 00:28:47.065 [2024-10-01 12:48:24.675192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.065 [2024-10-01 12:48:24.940355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.065 cpumask for '\''job0'\'' is too big 00:28:47.065 cpumask for '\''job1'\'' is too big 00:28:47.065 cpumask for '\''job2'\'' is too big 00:28:47.065 cpumask for '\''job3'\'' is too big 00:28:47.065 Running I/O for 2 seconds... 
00:28:47.065 00:28:47.065 Latency(us) 00:28:47.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17694.16 17.28 0.00 0.00 14456.89 2671.45 22108.53 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17683.20 17.27 0.00 0.00 14460.56 3184.68 22108.53 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17672.92 17.26 0.00 0.00 14433.67 2566.17 19476.56 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17662.34 17.25 0.00 0.00 14432.10 3066.24 19476.56 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17651.26 17.24 0.00 0.00 14405.53 2684.61 16844.59 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17640.80 17.23 0.00 0.00 14404.01 3224.16 16739.32 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17630.65 17.22 0.00 0.00 14378.26 2579.33 15054.86 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17619.66 17.21 0.00 0.00 14376.48 3105.72 15054.86 00:28:47.065 =================================================================================================================== 00:28:47.065 Total : 141255.00 137.94 0.00 0.00 14418.44 2566.17 22108.53' 00:28:47.065 12:48:29 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-10-01 12:48:24.509052] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:47.065 [2024-10-01 12:48:24.509217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132744 ] 00:28:47.065 Using job config with 4 jobs 00:28:47.065 [2024-10-01 12:48:24.675192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.065 [2024-10-01 12:48:24.940355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.065 cpumask for '\''job0'\'' is too big 00:28:47.065 cpumask for '\''job1'\'' is too big 00:28:47.065 cpumask for '\''job2'\'' is too big 00:28:47.065 cpumask for '\''job3'\'' is too big 00:28:47.065 Running I/O for 2 seconds... 
00:28:47.065 00:28:47.065 Latency(us) 00:28:47.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17694.16 17.28 0.00 0.00 14456.89 2671.45 22108.53 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17683.20 17.27 0.00 0.00 14460.56 3184.68 22108.53 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17672.92 17.26 0.00 0.00 14433.67 2566.17 19476.56 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17662.34 17.25 0.00 0.00 14432.10 3066.24 19476.56 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17651.26 17.24 0.00 0.00 14405.53 2684.61 16844.59 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17640.80 17.23 0.00 0.00 14404.01 3224.16 16739.32 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17630.65 17.22 0.00 0.00 14378.26 2579.33 15054.86 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17619.66 17.21 0.00 0.00 14376.48 3105.72 15054.86 00:28:47.065 =================================================================================================================== 00:28:47.065 Total : 141255.00 137.94 0.00 0.00 14418.44 2566.17 22108.53' 00:28:47.065 12:48:29 -- bdevperf/common.sh@32 -- # echo '[2024-10-01 12:48:24.509052] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:47.065 [2024-10-01 12:48:24.509217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132744 ] 00:28:47.065 Using job config with 4 jobs 00:28:47.065 [2024-10-01 12:48:24.675192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.065 [2024-10-01 12:48:24.940355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.065 cpumask for '\''job0'\'' is too big 00:28:47.065 cpumask for '\''job1'\'' is too big 00:28:47.065 cpumask for '\''job2'\'' is too big 00:28:47.065 cpumask for '\''job3'\'' is too big 00:28:47.065 Running I/O for 2 seconds... 
00:28:47.065 00:28:47.065 Latency(us) 00:28:47.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17694.16 17.28 0.00 0.00 14456.89 2671.45 22108.53 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17683.20 17.27 0.00 0.00 14460.56 3184.68 22108.53 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17672.92 17.26 0.00 0.00 14433.67 2566.17 19476.56 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17662.34 17.25 0.00 0.00 14432.10 3066.24 19476.56 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17651.26 17.24 0.00 0.00 14405.53 2684.61 16844.59 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17640.80 17.23 0.00 0.00 14404.01 3224.16 16739.32 00:28:47.065 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc0 : 2.03 17630.65 17.22 0.00 0.00 14378.26 2579.33 15054.86 00:28:47.065 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:47.065 Malloc1 : 2.03 17619.66 17.21 0.00 0.00 14376.48 3105.72 15054.86 00:28:47.065 =================================================================================================================== 00:28:47.066 Total : 141255.00 137.94 0.00 0.00 14418.44 2566.17 22108.53' 00:28:47.066 12:48:29 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:47.066 12:48:29 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:47.066 12:48:29 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:28:47.066 12:48:29 -- bdevperf/test_config.sh@44 -- # cleanup 00:28:47.066 12:48:29 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:47.066 12:48:29 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:47.066 ************************************ 00:28:47.066 END TEST bdevperf_config 00:28:47.066 ************************************ 00:28:47.066 00:28:47.066 real 0m19.761s 00:28:47.066 user 0m17.646s 00:28:47.066 sys 0m1.566s 00:28:47.066 12:48:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.066 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:28:47.066 12:48:29 -- spdk/autotest.sh@198 -- # uname -s 00:28:47.066 12:48:29 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:28:47.066 12:48:29 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:28:47.066 12:48:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:47.066 12:48:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.066 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:28:47.066 ************************************ 00:28:47.066 START TEST reactor_set_interrupt 00:28:47.066 ************************************ 00:28:47.066 12:48:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:28:47.066 * Looking for test storage... 
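Every suite in this log is driven through the same run_test wrapper from autotest_common.sh: it prints the START TEST / END TEST banners seen above and times the script in between, which is where the real/user/sys summary after END TEST bdevperf_config comes from. A rough sketch of that wrapper, assuming a body the trace does not show (the real one also toggles xtrace via xtrace_disable/xtrace_restore):

run_test() {
    # needs at least a test name plus a command to run
    # (the '[' 2 -le 1 ']' check traced above)
    [ "$#" -le 1 ] && return 1
    local test_name=$1
    shift
    # time the whole banner-wrapped block so the real/user/sys summary
    # lands right after the END TEST banner, as in the log
    time {
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
}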
00:28:47.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.066 12:48:29 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:28:47.066 12:48:29 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:28:47.066 12:48:29 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.066 12:48:29 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.066 12:48:29 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:28:47.066 12:48:29 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:47.066 12:48:29 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:28:47.066 12:48:29 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:28:47.066 12:48:29 -- common/autotest_common.sh@34 -- # set -e 00:28:47.066 12:48:29 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:28:47.066 12:48:29 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:28:47.066 12:48:29 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:28:47.066 12:48:29 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:28:47.066 12:48:29 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:28:47.066 12:48:29 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:28:47.066 12:48:29 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:28:47.066 12:48:29 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:28:47.066 12:48:29 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:28:47.327 12:48:29 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:28:47.327 12:48:29 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:28:47.327 12:48:29 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:28:47.327 12:48:29 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:28:47.327 12:48:29 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:28:47.327 12:48:29 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:28:47.327 12:48:29 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:28:47.327 12:48:29 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:28:47.327 12:48:29 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:28:47.327 12:48:29 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:28:47.327 12:48:29 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:28:47.327 12:48:29 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:28:47.327 12:48:29 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:28:47.327 12:48:29 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:47.327 12:48:29 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:28:47.327 12:48:29 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:28:47.328 12:48:29 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:28:47.328 12:48:29 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:28:47.328 12:48:29 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:28:47.328 12:48:29 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:28:47.328 12:48:29 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:28:47.328 12:48:29 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
00:28:47.328 12:48:29 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:28:47.328 12:48:29 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:28:47.328 12:48:29 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:28:47.328 12:48:29 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:28:47.328 12:48:29 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:28:47.328 12:48:29 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:28:47.328 12:48:29 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:28:47.328 12:48:29 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:28:47.328 12:48:29 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:28:47.328 12:48:29 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:28:47.328 12:48:29 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:28:47.328 12:48:29 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:28:47.328 12:48:29 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:28:47.328 12:48:29 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:28:47.328 12:48:29 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:28:47.328 12:48:29 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:28:47.328 12:48:29 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:28:47.328 12:48:29 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:28:47.328 12:48:29 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:28:47.328 12:48:29 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:28:47.328 12:48:29 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:28:47.328 12:48:29 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:28:47.328 12:48:29 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:28:47.328 12:48:29 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:28:47.328 12:48:29 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:28:47.328 12:48:29 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:28:47.328 12:48:29 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:28:47.328 12:48:29 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:28:47.328 12:48:29 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:28:47.328 12:48:29 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:28:47.328 12:48:29 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:28:47.328 12:48:29 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:28:47.328 12:48:29 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:28:47.328 12:48:29 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:28:47.328 12:48:29 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:28:47.328 12:48:29 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:28:47.328 12:48:29 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:28:47.328 12:48:29 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:28:47.328 12:48:29 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:28:47.328 12:48:29 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:28:47.328 12:48:29 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:28:47.328 12:48:29 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:28:47.328 12:48:29 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:28:47.328 12:48:29 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:28:47.328 12:48:29 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:28:47.328 12:48:29 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:28:47.328 12:48:29 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:28:47.328 12:48:29 -- 
common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:28:47.328 12:48:29 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:28:47.328 12:48:29 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:28:47.328 12:48:29 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:28:47.328 12:48:29 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:28:47.328 12:48:29 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:28:47.328 12:48:29 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:28:47.328 12:48:29 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:28:47.328 12:48:29 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:28:47.328 12:48:29 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:28:47.328 12:48:29 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:28:47.328 12:48:29 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:28:47.328 12:48:29 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:28:47.328 12:48:29 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:28:47.328 12:48:29 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:28:47.328 12:48:29 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:28:47.328 12:48:29 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:28:47.328 12:48:29 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:28:47.328 12:48:29 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:28:47.328 12:48:29 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:28:47.328 12:48:29 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:28:47.328 #define SPDK_CONFIG_H 00:28:47.328 #define SPDK_CONFIG_APPS 1 00:28:47.328 #define SPDK_CONFIG_ARCH native 00:28:47.328 #define SPDK_CONFIG_ASAN 1 00:28:47.328 #undef SPDK_CONFIG_AVAHI 00:28:47.328 #undef SPDK_CONFIG_CET 00:28:47.328 #define SPDK_CONFIG_COVERAGE 1 00:28:47.328 #define SPDK_CONFIG_CROSS_PREFIX 00:28:47.328 #undef SPDK_CONFIG_CRYPTO 00:28:47.328 #undef SPDK_CONFIG_CRYPTO_MLX5 00:28:47.328 #undef SPDK_CONFIG_CUSTOMOCF 00:28:47.328 #undef SPDK_CONFIG_DAOS 00:28:47.328 #define SPDK_CONFIG_DAOS_DIR 00:28:47.328 #define SPDK_CONFIG_DEBUG 1 00:28:47.328 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:28:47.328 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:28:47.328 #define SPDK_CONFIG_DPDK_INC_DIR 00:28:47.328 #define SPDK_CONFIG_DPDK_LIB_DIR 00:28:47.328 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:28:47.328 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:47.328 #define SPDK_CONFIG_EXAMPLES 1 00:28:47.328 #undef SPDK_CONFIG_FC 00:28:47.328 #define SPDK_CONFIG_FC_PATH 00:28:47.328 #define SPDK_CONFIG_FIO_PLUGIN 1 00:28:47.328 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:28:47.328 #undef SPDK_CONFIG_FUSE 00:28:47.328 #undef SPDK_CONFIG_FUZZER 00:28:47.328 #define SPDK_CONFIG_FUZZER_LIB 00:28:47.328 #undef SPDK_CONFIG_GOLANG 00:28:47.328 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:28:47.328 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:28:47.328 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:28:47.328 #undef SPDK_CONFIG_HAVE_LIBBSD 00:28:47.328 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:28:47.328 #define 
SPDK_CONFIG_IDXD 1 00:28:47.328 #undef SPDK_CONFIG_IDXD_KERNEL 00:28:47.328 #undef SPDK_CONFIG_IPSEC_MB 00:28:47.328 #define SPDK_CONFIG_IPSEC_MB_DIR 00:28:47.328 #define SPDK_CONFIG_ISAL 1 00:28:47.328 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:28:47.328 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:28:47.328 #define SPDK_CONFIG_LIBDIR 00:28:47.328 #undef SPDK_CONFIG_LTO 00:28:47.328 #define SPDK_CONFIG_MAX_LCORES 00:28:47.328 #define SPDK_CONFIG_NVME_CUSE 1 00:28:47.328 #undef SPDK_CONFIG_OCF 00:28:47.328 #define SPDK_CONFIG_OCF_PATH 00:28:47.328 #define SPDK_CONFIG_OPENSSL_PATH 00:28:47.328 #undef SPDK_CONFIG_PGO_CAPTURE 00:28:47.328 #undef SPDK_CONFIG_PGO_USE 00:28:47.328 #define SPDK_CONFIG_PREFIX /usr/local 00:28:47.328 #define SPDK_CONFIG_RAID5F 1 00:28:47.328 #undef SPDK_CONFIG_RBD 00:28:47.328 #define SPDK_CONFIG_RDMA 1 00:28:47.328 #define SPDK_CONFIG_RDMA_PROV verbs 00:28:47.328 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:28:47.328 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:28:47.328 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:28:47.328 #undef SPDK_CONFIG_SHARED 00:28:47.328 #undef SPDK_CONFIG_SMA 00:28:47.328 #define SPDK_CONFIG_TESTS 1 00:28:47.328 #undef SPDK_CONFIG_TSAN 00:28:47.328 #undef SPDK_CONFIG_UBLK 00:28:47.328 #define SPDK_CONFIG_UBSAN 1 00:28:47.328 #define SPDK_CONFIG_UNIT_TESTS 1 00:28:47.328 #undef SPDK_CONFIG_URING 00:28:47.328 #define SPDK_CONFIG_URING_PATH 00:28:47.328 #undef SPDK_CONFIG_URING_ZNS 00:28:47.328 #undef SPDK_CONFIG_USDT 00:28:47.328 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:28:47.328 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:28:47.328 #undef SPDK_CONFIG_VFIO_USER 00:28:47.328 #define SPDK_CONFIG_VFIO_USER_DIR 00:28:47.328 #define SPDK_CONFIG_VHOST 1 00:28:47.328 #define SPDK_CONFIG_VIRTIO 1 00:28:47.328 #undef SPDK_CONFIG_VTUNE 00:28:47.328 #define SPDK_CONFIG_VTUNE_DIR 00:28:47.328 #define SPDK_CONFIG_WERROR 1 00:28:47.328 #define SPDK_CONFIG_WPDK_DIR 00:28:47.328 #undef SPDK_CONFIG_XNVME 00:28:47.328 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:28:47.328 12:48:29 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:28:47.328 12:48:29 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:47.328 12:48:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.328 12:48:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.328 12:48:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.328 12:48:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:47.328 12:48:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:47.328 12:48:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:47.328 12:48:29 -- paths/export.sh@5 -- # export PATH 00:28:47.329 12:48:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:47.329 12:48:29 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:28:47.329 12:48:29 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:28:47.329 12:48:29 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:28:47.329 12:48:29 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:28:47.329 12:48:29 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:28:47.329 12:48:29 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:28:47.329 12:48:29 -- pm/common@16 -- # TEST_TAG=N/A 00:28:47.329 12:48:29 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:28:47.329 12:48:29 -- common/autotest_common.sh@52 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:28:47.329 12:48:29 -- common/autotest_common.sh@56 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:28:47.329 12:48:29 -- common/autotest_common.sh@58 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:28:47.329 12:48:29 -- common/autotest_common.sh@60 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:28:47.329 12:48:29 -- common/autotest_common.sh@62 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:28:47.329 12:48:29 -- common/autotest_common.sh@64 -- # : 00:28:47.329 12:48:29 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:28:47.329 12:48:29 -- common/autotest_common.sh@66 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:28:47.329 12:48:29 -- common/autotest_common.sh@68 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:28:47.329 12:48:29 -- common/autotest_common.sh@70 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:28:47.329 12:48:29 -- common/autotest_common.sh@72 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:28:47.329 12:48:29 -- common/autotest_common.sh@74 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:28:47.329 12:48:29 -- common/autotest_common.sh@76 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:28:47.329 12:48:29 -- common/autotest_common.sh@78 -- # : 0 00:28:47.329 12:48:29 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:28:47.329 12:48:29 -- common/autotest_common.sh@80 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:28:47.329 12:48:29 -- common/autotest_common.sh@82 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:28:47.329 12:48:29 -- common/autotest_common.sh@84 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:28:47.329 12:48:29 -- common/autotest_common.sh@86 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:28:47.329 12:48:29 -- common/autotest_common.sh@88 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:28:47.329 12:48:29 -- common/autotest_common.sh@90 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:28:47.329 12:48:29 -- common/autotest_common.sh@92 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:28:47.329 12:48:29 -- common/autotest_common.sh@94 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:28:47.329 12:48:29 -- common/autotest_common.sh@96 -- # : rdma 00:28:47.329 12:48:29 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:28:47.329 12:48:29 -- common/autotest_common.sh@98 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:28:47.329 12:48:29 -- common/autotest_common.sh@100 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:28:47.329 12:48:29 -- common/autotest_common.sh@102 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:28:47.329 12:48:29 -- common/autotest_common.sh@104 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:28:47.329 12:48:29 -- common/autotest_common.sh@106 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:28:47.329 12:48:29 -- common/autotest_common.sh@108 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:28:47.329 12:48:29 -- common/autotest_common.sh@110 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:28:47.329 12:48:29 -- common/autotest_common.sh@112 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:28:47.329 12:48:29 -- common/autotest_common.sh@114 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:28:47.329 12:48:29 -- common/autotest_common.sh@116 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:28:47.329 12:48:29 -- common/autotest_common.sh@118 -- # : 00:28:47.329 12:48:29 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:28:47.329 12:48:29 -- common/autotest_common.sh@120 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:28:47.329 12:48:29 -- common/autotest_common.sh@122 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:28:47.329 12:48:29 -- common/autotest_common.sh@124 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:28:47.329 12:48:29 -- common/autotest_common.sh@126 -- # : 0 00:28:47.329 
12:48:29 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:28:47.329 12:48:29 -- common/autotest_common.sh@128 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:28:47.329 12:48:29 -- common/autotest_common.sh@130 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:28:47.329 12:48:29 -- common/autotest_common.sh@132 -- # : 00:28:47.329 12:48:29 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:28:47.329 12:48:29 -- common/autotest_common.sh@134 -- # : true 00:28:47.329 12:48:29 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:28:47.329 12:48:29 -- common/autotest_common.sh@136 -- # : 1 00:28:47.329 12:48:29 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:28:47.329 12:48:29 -- common/autotest_common.sh@138 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:28:47.329 12:48:29 -- common/autotest_common.sh@140 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:28:47.329 12:48:29 -- common/autotest_common.sh@142 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:28:47.329 12:48:29 -- common/autotest_common.sh@144 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:28:47.329 12:48:29 -- common/autotest_common.sh@146 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:28:47.329 12:48:29 -- common/autotest_common.sh@148 -- # : 00:28:47.329 12:48:29 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:28:47.329 12:48:29 -- common/autotest_common.sh@150 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:28:47.329 12:48:29 -- common/autotest_common.sh@152 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:28:47.329 12:48:29 -- common/autotest_common.sh@154 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:28:47.329 12:48:29 -- common/autotest_common.sh@156 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:28:47.329 12:48:29 -- common/autotest_common.sh@158 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:28:47.329 12:48:29 -- common/autotest_common.sh@160 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:28:47.329 12:48:29 -- common/autotest_common.sh@163 -- # : 00:28:47.329 12:48:29 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:28:47.329 12:48:29 -- common/autotest_common.sh@165 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:28:47.329 12:48:29 -- common/autotest_common.sh@167 -- # : 0 00:28:47.329 12:48:29 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:28:47.329 12:48:29 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:28:47.329 12:48:29 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:28:47.329 12:48:29 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:28:47.329 12:48:29 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:28:47.329 12:48:29 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:47.329 12:48:29 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:47.329 12:48:29 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:47.329 12:48:29 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:47.329 12:48:29 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:28:47.329 12:48:29 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:28:47.329 12:48:29 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:47.329 12:48:29 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:47.329 12:48:29 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:28:47.330 12:48:29 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:28:47.330 12:48:29 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:47.330 12:48:29 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:47.330 12:48:29 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:47.330 12:48:29 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:47.330 12:48:29 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:28:47.330 12:48:29 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:28:47.330 12:48:29 -- common/autotest_common.sh@196 -- # cat 00:28:47.330 12:48:29 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:28:47.330 12:48:29 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:47.330 12:48:29 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:47.330 12:48:29 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:47.330 
12:48:29 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:47.330 12:48:29 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:28:47.330 12:48:29 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:28:47.330 12:48:29 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:28:47.330 12:48:29 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:28:47.330 12:48:29 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:28:47.330 12:48:29 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:28:47.330 12:48:29 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:28:47.330 12:48:29 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:28:47.330 12:48:29 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:28:47.330 12:48:29 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:28:47.330 12:48:29 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:28:47.330 12:48:29 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:28:47.330 12:48:29 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:47.330 12:48:29 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:47.330 12:48:29 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:28:47.330 12:48:29 -- common/autotest_common.sh@249 -- # export valgrind= 00:28:47.330 12:48:29 -- common/autotest_common.sh@249 -- # valgrind= 00:28:47.330 12:48:29 -- common/autotest_common.sh@255 -- # uname -s 00:28:47.330 12:48:29 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:28:47.330 12:48:29 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:28:47.330 12:48:29 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:28:47.330 12:48:29 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:28:47.330 12:48:29 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@265 -- # MAKE=make 00:28:47.330 12:48:29 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:28:47.330 12:48:29 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:28:47.330 12:48:29 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:28:47.330 12:48:29 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:28:47.330 12:48:29 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:28:47.330 12:48:29 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:28:47.330 12:48:29 -- common/autotest_common.sh@309 -- # [[ -z 132851 ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@309 -- # kill -0 132851 00:28:47.330 12:48:29 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:28:47.330 12:48:29 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:28:47.330 12:48:29 -- common/autotest_common.sh@322 -- # local mount target_dir 00:28:47.330 12:48:29 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:28:47.330 12:48:29 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:28:47.330 12:48:29 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:28:47.330 12:48:29 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:28:47.330 12:48:29 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.M09GgN 00:28:47.330 12:48:29 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:28:47.330 12:48:29 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.M09GgN/tests/interrupt /tmp/spdk.M09GgN 00:28:47.330 12:48:29 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@318 -- # df -T 00:28:47.330 12:48:29 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:28:47.330 12:48:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=10256523264 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:28:47.330 12:48:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=10343493632 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265810944 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:28:47.330 12:48:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:28:47.330 12:48:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:28:47.330 12:48:29 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:28:47.330 12:48:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:28:47.330 12:48:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=95150477312 00:28:47.330 12:48:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:28:47.330 12:48:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4552302592 00:28:47.330 12:48:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:28:47.330 12:48:29 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:28:47.330 * Looking for test storage... 00:28:47.330 12:48:29 -- common/autotest_common.sh@359 -- # local target_space new_size 00:28:47.330 12:48:29 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:28:47.330 12:48:29 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.330 12:48:29 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:28:47.330 12:48:29 -- common/autotest_common.sh@363 -- # mount=/ 00:28:47.330 12:48:29 -- common/autotest_common.sh@365 -- # target_space=10256523264 00:28:47.330 12:48:29 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:28:47.330 12:48:29 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:28:47.330 12:48:29 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:28:47.330 12:48:29 -- common/autotest_common.sh@372 -- # new_size=12558086144 00:28:47.330 12:48:29 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:28:47.330 12:48:29 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.330 12:48:29 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.330 12:48:29 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:47.330 12:48:29 -- common/autotest_common.sh@380 -- # return 0 00:28:47.330 12:48:29 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:28:47.330 12:48:29 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:28:47.330 12:48:29 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:28:47.330 12:48:29 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:28:47.330 
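The df -T pass traced above is how set_test_storage chooses where the interrupt tests may write: every mount point is tabulated into associative arrays, then each candidate directory is accepted only if its filesystem has the requested space free and the reservation would not push usage past 95%. A condensed sketch with the array and variable names taken from the trace (the tmpfs/ramfs special cases and the fallback mkdir are omitted):

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

requested_size=2214592512      # the ~2 GiB reservation computed in the log
testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
storage_fallback=$(mktemp -udt spdk.XXXXXX)
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

for target_dir in "${storage_candidates[@]}"; do
    # resolve which mount point owns this candidate directory
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space == 0 || target_space < requested_size )) && continue
    # refuse a filesystem that the reservation would push past 95% full
    new_size=$(( ${uses[$mount]} + requested_size ))
    (( new_size * 100 / ${sizes[$mount]} > 95 )) && continue
    export SPDK_TEST_STORAGE=$target_dir
    printf '* Found test storage at %s\n' "$target_dir"
    break
done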
12:48:29 -- common/autotest_common.sh@1672 -- # true 00:28:47.330 12:48:29 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:28:47.331 12:48:29 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:28:47.331 12:48:29 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:28:47.331 12:48:29 -- common/autotest_common.sh@27 -- # exec 00:28:47.331 12:48:29 -- common/autotest_common.sh@29 -- # exec 00:28:47.331 12:48:29 -- common/autotest_common.sh@31 -- # xtrace_restore 00:28:47.331 12:48:29 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:28:47.331 12:48:29 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:28:47.331 12:48:29 -- common/autotest_common.sh@18 -- # set -x 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:28:47.331 12:48:29 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:28:47.331 12:48:29 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:28:47.331 12:48:29 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=132892 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:47.331 12:48:29 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 132892 /var/tmp/spdk.sock 00:28:47.331 12:48:29 -- common/autotest_common.sh@819 -- # '[' -z 132892 ']' 00:28:47.331 12:48:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.331 12:48:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:47.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.331 12:48:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.331 12:48:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:47.331 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:28:47.331 [2024-10-01 12:48:29.828367] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
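Here the suite brings up the process it will drive for the rest of the log: interrupt_tgt, pinned to three cores by the 0x07 cpumask (matching r0_mask=0x1, r1_mask=0x2 and r2_mask=0x4 set a few lines earlier), with a private RPC socket at /var/tmp/spdk.sock. A sketch of the start_intr_tgt sequence follows; the flags, trap string and helper names are from the trace, while the backgrounding and pid capture are assumptions about what the trace implies:

rpc_addr=/var/tmp/spdk.sock
cpu_mask=0x07     # one reactor each on cores 0, 1 and 2

# launch the example target with the exact flags traced above
/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m "$cpu_mask" -r "$rpc_addr" -E -g &
intr_tgt_pid=$!

# tear the target down again on any signal or abnormal exit
trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT

# block until the target accepts connections on its UNIX-domain RPC socket
waitforlisten "$intr_tgt_pid" "$rpc_addr"

Once waitforlisten returns, the later rpc.py calls in the log (thread_get_stats, bdev_aio_create, and so on) all go through this socket.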
00:28:47.331 [2024-10-01 12:48:29.828515] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132892 ] 00:28:47.590 [2024-10-01 12:48:30.005402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:47.848 [2024-10-01 12:48:30.250755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.848 [2024-10-01 12:48:30.250944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.848 [2024-10-01 12:48:30.250947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.107 [2024-10-01 12:48:30.635237] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:49.045 12:48:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:49.045 12:48:31 -- common/autotest_common.sh@852 -- # return 0 00:28:49.045 12:48:31 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:28:49.045 12:48:31 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:49.304 Malloc0 00:28:49.304 Malloc1 00:28:49.304 Malloc2 00:28:49.304 12:48:31 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:28:49.304 12:48:31 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:28:49.304 12:48:31 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:49.304 12:48:31 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:49.304 5000+0 records in 00:28:49.304 5000+0 records out 00:28:49.304 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0274831 s, 373 MB/s 00:28:49.304 12:48:31 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:28:49.562 AIO0 00:28:49.562 12:48:31 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 132892 00:28:49.562 12:48:31 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 132892 without_thd 00:28:49.562 12:48:31 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=132892 00:28:49.562 12:48:31 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:28:49.563 12:48:31 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:28:49.563 12:48:31 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:49.563 12:48:31 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:28:49.563 12:48:31 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:28:49.563 12:48:31 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:28:49.563 12:48:31 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:49.563 12:48:31 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:49.563 12:48:31 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:28:49.822 12:48:32 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:49.822 12:48:32 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:28:49.822 12:48:32 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:49.822 spdk_thread ids are 1 on reactor0. 00:28:49.822 12:48:32 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:28:49.822 12:48:32 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:49.822 12:48:32 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132892 0 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132892 0 idle 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@33 -- # local pid=132892 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132892 -w 256 00:28:49.822 12:48:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132892 root 20 0 20.1t 146216 29176 S 6.7 1.2 0:00.95 reactor_0' 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@48 -- # echo 132892 root 20 0 20.1t 146216 29176 S 6.7 1.2 0:00.95 reactor_0 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:50.082 12:48:32 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:50.082 12:48:32 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132892 1 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132892 1 idle 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@33 -- # local pid=132892 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:50.082 
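The two thread-id lookups traced above (masks 0x1 and 0x4) reduce to a single RPC plus a jq filter: thread_get_stats reports every SPDK thread's cpumask, and the helper keeps the ids whose mask matches the reactor's. A sketch mirroring the traced commands; note the mask is compared without its 0x prefix (0x1 becomes "1"), and an empty result, as for reactor 2 here, simply yields no ids:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# List SPDK threads over RPC, keep ids whose cpumask matches the reactor's.
reactor_get_thread_ids() {
    local reactor_cpumask=$1
    reactor_cpumask=$((reactor_cpumask))   # strip the 0x: 0x1 -> 1, 0x4 -> 4
    "$rpc_py" thread_get_stats \
        | jq --arg reactor_cpumask "$reactor_cpumask" \
             '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}

thd0_ids=($(reactor_get_thread_ids 0x1))   # "1": the app_thread on reactor 0
thd2_ids=($(reactor_get_thread_ids 0x4))   # empty: nothing pinned to reactor 2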
12:48:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132892 -w 256 00:28:50.082 12:48:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132896 root 20 0 20.1t 146216 29176 S 0.0 1.2 0:00.00 reactor_1' 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # echo 132896 root 20 0 20.1t 146216 29176 S 0.0 1.2 0:00.00 reactor_1 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:50.342 12:48:32 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:50.342 12:48:32 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132892 2 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132892 2 idle 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@33 -- # local pid=132892 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132892 -w 256 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132897 root 20 0 20.1t 146216 29176 S 0.0 1.2 0:00.00 reactor_2' 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # echo 132897 root 20 0 20.1t 146216 29176 S 0.0 1.2 0:00.00 reactor_2 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:50.342 12:48:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:50.342 12:48:32 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:28:50.342 12:48:32 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
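Each reactor_is_idle / reactor_is_busy call in the trace is one batched top sample: -bHn 1 lists the target's threads once, grep selects the reactor_N row, and sed/awk extract the %CPU column, which is then compared against the thresholds visible above (busy means at least ~70%, idle at most ~30%). Condensed into one function:

# Condensed sketch of the traced reactor_is_busy_or_idle check.
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3 top_reactor cpu_rate
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}   # truncate: 99.9 -> 99, 6.7 -> 6
    if [[ $state == busy ]]; then
        [[ $cpu_rate -ge 70 ]]   # a polling reactor should spin near 100%
    else
        [[ $cpu_rate -le 30 ]]   # an interrupt-mode reactor should be quiet
    fi
}

reactor_is_busy_or_idle 132892 0 idle && echo 'reactor 0 is idle'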
00:28:50.342 12:48:32 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:28:50.601 [2024-10-01 12:48:33.048069] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:50.601 12:48:33 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:50.861 [2024-10-01 12:48:33.239782] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:50.861 [2024-10-01 12:48:33.240557] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:50.861 12:48:33 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:51.121 [2024-10-01 12:48:33.435619] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:28:51.121 [2024-10-01 12:48:33.436456] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:51.121 12:48:33 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:51.121 12:48:33 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132892 0 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132892 0 busy 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@33 -- # local pid=132892 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132892 -w 256 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132892 root 20 0 20.1t 146320 29176 R 99.9 1.2 0:01.33 reactor_0' 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@48 -- # echo 132892 root 20 0 20.1t 146320 29176 R 99.9 1.2 0:01.33 reactor_0 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:51.121 12:48:33 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:51.121 12:48:33 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132892 2 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132892 2 busy 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@33 -- # local pid=132892 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:51.121 
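The RPC calls just traced come from the interrupt_plugin shipped alongside the interrupt_tgt example: reactor_set_interrupt_mode N -d switches reactor N from interrupt-driven waiting back to classic polling, which is why the subsequent top samples report reactor_0 and reactor_2 pegged near 100% even with no real I/O. Invoked directly, with the pid from this run:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Put reactors 0 and 2 into poll mode (-d disables interrupt mode)...
for i in 0 2; do
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode "$i" -d
done

# ...and a polling reactor burns a full core even when idle, so top
# should now show both rows busy.
top -bHn 1 -p 132892 -w 256 | grep -E 'reactor_(0|2)'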
12:48:33 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132892 -w 256 00:28:51.121 12:48:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132897 root 20 0 20.1t 146320 29176 R 93.8 1.2 0:00.35 reactor_2' 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@48 -- # echo 132897 root 20 0 20.1t 146320 29176 R 93.8 1.2 0:00.35 reactor_2 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:28:51.381 12:48:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:51.381 12:48:33 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:28:51.641 [2024-10-01 12:48:34.003680] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:28:51.641 [2024-10-01 12:48:34.004368] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:51.641 12:48:34 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:28:51.641 12:48:34 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 132892 2 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132892 2 idle 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@33 -- # local pid=132892 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132892 -w 256 00:28:51.641 12:48:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132897 root 20 0 20.1t 146388 29176 S 0.0 1.2 0:00.56 reactor_2' 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@48 -- # echo 132897 root 20 0 20.1t 146388 29176 S 0.0 1.2 0:00.56 reactor_2 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:51.901 12:48:34 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:51.901 12:48:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:51.901 12:48:34 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:51.901 [2024-10-01 12:48:34.383662] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:51.901 [2024-10-01 12:48:34.384423] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:51.901 12:48:34 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:28:51.901 12:48:34 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:28:51.901 12:48:34 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:28:52.159 [2024-10-01 12:48:34.584052] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:52.159 12:48:34 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 132892 0 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132892 0 idle 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@33 -- # local pid=132892 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:52.159 12:48:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132892 -w 256 00:28:52.160 12:48:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132892 root 20 0 20.1t 146480 29176 S 0.0 1.2 0:02.11 reactor_0' 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@48 -- # echo 132892 root 20 0 20.1t 146480 29176 S 0.0 1.2 0:02.11 reactor_0 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:52.429 12:48:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:52.429 12:48:34 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:52.429 12:48:34 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:28:52.429 12:48:34 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:28:52.429 12:48:34 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 132892 
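The teardown traced above mirrors the setup in reverse: interrupt mode is re-enabled per reactor, then the app_thread that was parked on reactor 1 (mask 0x2) is pinned back to reactor 0's mask before the final idle check. Roughly, with rpc_py and thd0_ids as in the earlier sketches:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
thd0_ids=(1)   # from the earlier thread_get_stats lookup

"$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2   # reactor 2 back to intr
"$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0   # reactor 0 back to intr
for i in "${thd0_ids[@]}"; do
    "$rpc_py" thread_set_cpumask -i "$i" -m 0x1   # app_thread home to reactor 0
done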
00:28:52.429 12:48:34 -- common/autotest_common.sh@926 -- # '[' -z 132892 ']' 00:28:52.429 12:48:34 -- common/autotest_common.sh@930 -- # kill -0 132892 00:28:52.429 12:48:34 -- common/autotest_common.sh@931 -- # uname 00:28:52.429 12:48:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:52.429 12:48:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132892 00:28:52.429 12:48:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:52.429 12:48:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:52.429 12:48:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132892' 00:28:52.429 killing process with pid 132892 00:28:52.429 12:48:34 -- common/autotest_common.sh@945 -- # kill 132892 00:28:52.429 12:48:34 -- common/autotest_common.sh@950 -- # wait 132892 00:28:54.345 12:48:36 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:28:54.345 12:48:36 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:28:54.345 12:48:36 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:28:54.346 12:48:36 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.346 12:48:36 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:28:54.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.346 12:48:36 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=133056 00:28:54.346 12:48:36 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:54.346 12:48:36 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:54.346 12:48:36 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 133056 /var/tmp/spdk.sock 00:28:54.346 12:48:36 -- common/autotest_common.sh@819 -- # '[' -z 133056 ']' 00:28:54.346 12:48:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.346 12:48:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:54.346 12:48:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.346 12:48:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:54.346 12:48:36 -- common/autotest_common.sh@10 -- # set +x 00:28:54.346 [2024-10-01 12:48:36.762919] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:54.346 [2024-10-01 12:48:36.763194] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133056 ] 00:28:54.605 [2024-10-01 12:48:36.941081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:54.866 [2024-10-01 12:48:37.170094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.866 [2024-10-01 12:48:37.170285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.866 [2024-10-01 12:48:37.170286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.125 [2024-10-01 12:48:37.545535] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
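killprocess, traced just before this second target comes up, is deliberately defensive: it checks the pid is non-empty and alive (kill -0), and on Linux confirms via ps that the command name is not sudo before signalling, then waits so the child is reaped. A condensed sketch; the real helper has extra handling for sudo-wrapped processes, simplified here to a refusal:

# Condensed sketch of the traced killprocess helper.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                      # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1     # don't signal a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap and propagate status
}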
00:28:55.125 12:48:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:55.125 12:48:37 -- common/autotest_common.sh@852 -- # return 0 00:28:55.125 12:48:37 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:28:55.125 12:48:37 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:55.695 Malloc0 00:28:55.695 Malloc1 00:28:55.695 Malloc2 00:28:55.695 12:48:37 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:28:55.695 12:48:37 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:28:55.695 12:48:37 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:55.695 12:48:37 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:55.695 5000+0 records in 00:28:55.695 5000+0 records out 00:28:55.695 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0368742 s, 278 MB/s 00:28:55.695 12:48:38 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:28:55.955 AIO0 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 133056 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 133056 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=133056 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:55.955 12:48:38 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:55.955 12:48:38 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:28:56.214 12:48:38 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:56.214 spdk_thread ids are 1 on reactor0. 
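The bdev setup for this second target is the same two-step recipe as the first: three malloc bdevs, then (on anything but FreeBSD, hence the uname guard) a 10 MB zero-filled file exposed as an AIO bdev. A sketch; the malloc sizes are illustrative assumptions, since the trace batches those calls through a bare rpc.py invocation without showing them:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile

for b in Malloc0 Malloc1 Malloc2; do
    "$rpc_py" bdev_malloc_create -b "$b" 32 512    # 32 MB, 512 B blocks (assumed sizes)
done

if [[ $(uname -s) != FreeBSD ]]; then
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000   # the 10 MB file seen above
    "$rpc_py" bdev_aio_create "$aiofile" AIO0 2048     # block size 2048, as traced
fi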
00:28:56.214 12:48:38 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:28:56.214 12:48:38 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:56.214 12:48:38 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 133056 0 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 133056 0 idle 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@33 -- # local pid=133056 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 133056 -w 256 00:28:56.214 12:48:38 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 133056 root 20 0 20.1t 146036 29044 S 6.2 1.2 0:00.91 reactor_0' 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@48 -- # echo 133056 root 20 0 20.1t 146036 29044 S 6.2 1.2 0:00.91 reactor_0 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.2 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:56.474 12:48:38 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:56.474 12:48:38 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 133056 1 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 133056 1 idle 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@33 -- # local pid=133056 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 133056 -w 256 00:28:56.474 12:48:38 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 133063 root 20 0 20.1t 146036 29044 S 0.0 1.2 0:00.00 reactor_1' 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@48 -- # echo 133063 root 20 0 20.1t 146036 29044 S 0.0 1.2 0:00.00 reactor_1 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:56.732 12:48:39 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:56.732 12:48:39 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:56.732 12:48:39 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 133056 2 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 133056 2 idle 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@33 -- # local pid=133056 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 133056 -w 256 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 133064 root 20 0 20.1t 146036 29044 S 0.0 1.2 0:00.00 reactor_2' 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@48 -- # echo 133064 root 20 0 20.1t 146036 29044 S 0.0 1.2 0:00.00 reactor_2 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:56.732 12:48:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:56.732 12:48:39 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:28:56.732 12:48:39 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:56.992 [2024-10-01 12:48:39.421955] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:56.992 [2024-10-01 12:48:39.422277] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:28:56.992 [2024-10-01 12:48:39.422663] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:56.992 12:48:39 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:57.253 [2024-10-01 12:48:39.609358] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
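One detail separates this pass from the first: without_thd is now empty, so the guard that traced as '[' without_thdx '!=' x ']' in the first run traces as '[' x '!=' x ']' here and the thread-parking step is skipped; the app_thread therefore follows reactor 0 into poll mode, as the "Set spdk_thread (app_thread) to poll mode" NOTICE above shows. The branch, sketched:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
without_thd=        # empty in this with-threads pass
thd0_ids=(1)        # app_thread id from the earlier lookup

# Only the without-threads variant parks the app_thread on reactor 1
# before flipping reactor 0 to poll mode.
if [ "${without_thd}x" != x ]; then
    for i in "${thd0_ids[@]}"; do
        "$rpc_py" thread_set_cpumask -i "$i" -m 0x2
    done
fi
"$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d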
00:28:57.253 [2024-10-01 12:48:39.609837] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:57.253 12:48:39 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:57.253 12:48:39 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 133056 0 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 133056 0 busy 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@33 -- # local pid=133056 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 133056 -w 256 00:28:57.253 12:48:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 133056 root 20 0 20.1t 146120 29044 R 99.9 1.2 0:01.29 reactor_0' 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # echo 133056 root 20 0 20.1t 146120 29044 R 99.9 1.2 0:01.29 reactor_0 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:57.512 12:48:39 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:57.512 12:48:39 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 133056 2 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 133056 2 busy 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@33 -- # local pid=133056 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 133056 -w 256 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 133064 root 20 0 20.1t 146120 29044 R 99.9 1.2 0:00.36 reactor_2' 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # echo 133064 root 20 0 20.1t 146120 29044 R 99.9 1.2 0:00.36 reactor_2 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:28:57.512 
12:48:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:28:57.512 12:48:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:57.512 12:48:39 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:28:57.772 [2024-10-01 12:48:40.184640] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:28:57.772 [2024-10-01 12:48:40.185220] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:57.772 12:48:40 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:28:57.772 12:48:40 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 133056 2 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 133056 2 idle 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@33 -- # local pid=133056 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 133056 -w 256 00:28:57.772 12:48:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 133064 root 20 0 20.1t 146188 29044 S 0.0 1.2 0:00.57 reactor_2' 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@48 -- # echo 133064 root 20 0 20.1t 146188 29044 S 0.0 1.2 0:00.57 reactor_2 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:58.030 12:48:40 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:58.030 12:48:40 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:58.289 [2024-10-01 12:48:40.564125] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:58.289 [2024-10-01 12:48:40.564552] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
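Taken together, the with-threads pass reduces to a short driver: flip both reactors to poll mode and assert they spin, then flip them back and assert they go quiet, with the app_thread's intr/poll NOTICE lines above confirming each transition. A compact sketch reusing reactor_is_busy_or_idle and killprocess from the earlier sketches:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
intr_tgt_pid=133056   # pid of this run's target

for i in 0 2; do
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode "$i" -d
done
for i in 0 2; do
    reactor_is_busy_or_idle "$intr_tgt_pid" "$i" busy || exit 1
done
for i in 2 0; do
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode "$i"
    reactor_is_busy_or_idle "$intr_tgt_pid" "$i" idle || exit 1
done
killprocess "$intr_tgt_pid"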
00:28:58.289 [2024-10-01 12:48:40.564614] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:58.289 12:48:40 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:28:58.289 12:48:40 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 133056 0 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 133056 0 idle 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@33 -- # local pid=133056 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 133056 -w 256 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 133056 root 20 0 20.1t 146232 29044 S 0.0 1.2 0:02.07 reactor_0' 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@48 -- # echo 133056 root 20 0 20.1t 146232 29044 S 0.0 1.2 0:02.07 reactor_0 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:58.289 12:48:40 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:58.289 12:48:40 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:58.289 12:48:40 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:28:58.289 12:48:40 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:58.289 12:48:40 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 133056 00:28:58.289 12:48:40 -- common/autotest_common.sh@926 -- # '[' -z 133056 ']' 00:28:58.289 12:48:40 -- common/autotest_common.sh@930 -- # kill -0 133056 00:28:58.289 12:48:40 -- common/autotest_common.sh@931 -- # uname 00:28:58.289 12:48:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:58.289 12:48:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133056 00:28:58.289 12:48:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:58.289 12:48:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:58.289 12:48:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133056' 00:28:58.289 killing process with pid 133056 00:28:58.289 12:48:40 -- common/autotest_common.sh@945 -- # kill 133056 00:28:58.289 12:48:40 -- common/autotest_common.sh@950 -- # wait 133056 00:29:00.204 12:48:42 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:29:00.204 12:48:42 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:29:00.204 ************************************ 
00:29:00.204 END TEST reactor_set_interrupt 00:29:00.204 ************************************ 00:29:00.204 00:29:00.204 real 0m13.248s 00:29:00.204 user 0m12.781s 00:29:00.204 sys 0m2.193s 00:29:00.204 12:48:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.204 12:48:42 -- common/autotest_common.sh@10 -- # set +x 00:29:00.463 12:48:42 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:00.463 12:48:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.463 12:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.463 12:48:42 -- common/autotest_common.sh@10 -- # set +x 00:29:00.463 ************************************ 00:29:00.463 START TEST reap_unregistered_poller 00:29:00.463 ************************************ 00:29:00.463 12:48:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:00.463 * Looking for test storage... 00:29:00.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.463 12:48:42 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:29:00.463 12:48:42 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:00.463 12:48:42 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.463 12:48:42 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.463 12:48:42 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:29:00.463 12:48:42 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:00.463 12:48:42 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:29:00.463 12:48:42 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:00.463 12:48:42 -- common/autotest_common.sh@34 -- # set -e 00:29:00.463 12:48:42 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:00.463 12:48:42 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:00.463 12:48:42 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:29:00.464 12:48:42 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:29:00.464 12:48:42 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:00.464 12:48:42 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:29:00.464 12:48:42 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:29:00.464 12:48:42 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:00.464 12:48:42 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:29:00.464 12:48:42 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:00.464 12:48:42 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:29:00.464 12:48:42 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:00.464 12:48:42 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:00.464 12:48:42 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:00.464 12:48:42 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:00.464 12:48:42 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:00.464 12:48:42 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:00.464 12:48:42 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:00.464 
12:48:42 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:00.464 12:48:42 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:00.464 12:48:42 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:29:00.464 12:48:42 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:00.464 12:48:42 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:00.464 12:48:42 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:29:00.464 12:48:42 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:29:00.464 12:48:42 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:29:00.464 12:48:42 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:29:00.464 12:48:42 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:29:00.464 12:48:42 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:29:00.464 12:48:42 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:29:00.464 12:48:42 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:00.464 12:48:42 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:29:00.464 12:48:42 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:29:00.464 12:48:42 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:29:00.464 12:48:42 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:29:00.464 12:48:42 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:29:00.464 12:48:42 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:29:00.464 12:48:42 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:29:00.464 12:48:42 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:29:00.464 12:48:42 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:29:00.464 12:48:42 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:29:00.464 12:48:42 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:29:00.464 12:48:42 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:29:00.464 12:48:42 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:29:00.464 12:48:42 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:29:00.464 12:48:42 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:29:00.464 12:48:42 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:29:00.464 12:48:42 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:00.464 12:48:42 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:29:00.464 12:48:42 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:29:00.464 12:48:42 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:29:00.464 12:48:42 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:00.464 12:48:42 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:29:00.464 12:48:42 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:29:00.464 12:48:42 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:29:00.464 12:48:42 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:29:00.464 12:48:42 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:29:00.464 12:48:42 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:29:00.464 12:48:42 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:29:00.464 12:48:42 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:29:00.464 12:48:42 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:29:00.464 12:48:42 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:29:00.464 12:48:42 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:29:00.464 12:48:42 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:29:00.464 12:48:42 
-- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:29:00.464 12:48:42 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:29:00.464 12:48:42 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:29:00.464 12:48:42 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:29:00.464 12:48:42 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:29:00.464 12:48:42 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:00.464 12:48:42 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:29:00.464 12:48:42 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:29:00.464 12:48:42 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:29:00.464 12:48:42 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:29:00.464 12:48:42 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:29:00.464 12:48:42 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:29:00.464 12:48:42 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:29:00.464 12:48:42 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:29:00.464 12:48:42 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:29:00.464 12:48:42 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:29:00.464 12:48:42 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:29:00.464 12:48:42 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:29:00.464 12:48:42 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:29:00.464 12:48:42 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:00.464 12:48:42 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:00.464 12:48:42 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:29:00.464 12:48:42 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:29:00.464 12:48:42 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:29:00.464 12:48:42 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:29:00.464 12:48:42 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:29:00.464 12:48:42 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:29:00.464 12:48:42 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:00.464 12:48:42 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:00.464 12:48:42 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:00.464 12:48:42 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:00.464 12:48:42 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:00.464 12:48:42 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:00.464 12:48:42 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:29:00.464 12:48:42 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:00.464 #define SPDK_CONFIG_H 00:29:00.464 #define SPDK_CONFIG_APPS 1 00:29:00.464 #define SPDK_CONFIG_ARCH native 00:29:00.464 #define SPDK_CONFIG_ASAN 1 00:29:00.464 #undef SPDK_CONFIG_AVAHI 00:29:00.464 #undef SPDK_CONFIG_CET 00:29:00.464 #define SPDK_CONFIG_COVERAGE 1 00:29:00.464 #define SPDK_CONFIG_CROSS_PREFIX 00:29:00.464 #undef SPDK_CONFIG_CRYPTO 00:29:00.464 #undef SPDK_CONFIG_CRYPTO_MLX5 00:29:00.464 #undef SPDK_CONFIG_CUSTOMOCF 00:29:00.464 #undef SPDK_CONFIG_DAOS 00:29:00.464 #define SPDK_CONFIG_DAOS_DIR 00:29:00.464 
#define SPDK_CONFIG_DEBUG 1 00:29:00.464 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:29:00.464 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:29:00.464 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:00.464 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:00.464 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:00.464 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:00.464 #define SPDK_CONFIG_EXAMPLES 1 00:29:00.464 #undef SPDK_CONFIG_FC 00:29:00.464 #define SPDK_CONFIG_FC_PATH 00:29:00.464 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:00.464 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:00.464 #undef SPDK_CONFIG_FUSE 00:29:00.464 #undef SPDK_CONFIG_FUZZER 00:29:00.464 #define SPDK_CONFIG_FUZZER_LIB 00:29:00.464 #undef SPDK_CONFIG_GOLANG 00:29:00.464 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:29:00.464 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:00.464 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:00.464 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:00.464 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:00.464 #define SPDK_CONFIG_IDXD 1 00:29:00.464 #undef SPDK_CONFIG_IDXD_KERNEL 00:29:00.464 #undef SPDK_CONFIG_IPSEC_MB 00:29:00.464 #define SPDK_CONFIG_IPSEC_MB_DIR 00:29:00.464 #define SPDK_CONFIG_ISAL 1 00:29:00.464 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:00.464 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:00.464 #define SPDK_CONFIG_LIBDIR 00:29:00.464 #undef SPDK_CONFIG_LTO 00:29:00.464 #define SPDK_CONFIG_MAX_LCORES 00:29:00.464 #define SPDK_CONFIG_NVME_CUSE 1 00:29:00.464 #undef SPDK_CONFIG_OCF 00:29:00.464 #define SPDK_CONFIG_OCF_PATH 00:29:00.464 #define SPDK_CONFIG_OPENSSL_PATH 00:29:00.464 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:00.464 #undef SPDK_CONFIG_PGO_USE 00:29:00.464 #define SPDK_CONFIG_PREFIX /usr/local 00:29:00.464 #define SPDK_CONFIG_RAID5F 1 00:29:00.464 #undef SPDK_CONFIG_RBD 00:29:00.464 #define SPDK_CONFIG_RDMA 1 00:29:00.464 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:00.464 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:00.464 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:00.464 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:00.464 #undef SPDK_CONFIG_SHARED 00:29:00.464 #undef SPDK_CONFIG_SMA 00:29:00.464 #define SPDK_CONFIG_TESTS 1 00:29:00.464 #undef SPDK_CONFIG_TSAN 00:29:00.464 #undef SPDK_CONFIG_UBLK 00:29:00.464 #define SPDK_CONFIG_UBSAN 1 00:29:00.464 #define SPDK_CONFIG_UNIT_TESTS 1 00:29:00.464 #undef SPDK_CONFIG_URING 00:29:00.464 #define SPDK_CONFIG_URING_PATH 00:29:00.464 #undef SPDK_CONFIG_URING_ZNS 00:29:00.464 #undef SPDK_CONFIG_USDT 00:29:00.464 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:29:00.464 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:29:00.464 #undef SPDK_CONFIG_VFIO_USER 00:29:00.464 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:00.464 #define SPDK_CONFIG_VHOST 1 00:29:00.464 #define SPDK_CONFIG_VIRTIO 1 00:29:00.464 #undef SPDK_CONFIG_VTUNE 00:29:00.464 #define SPDK_CONFIG_VTUNE_DIR 00:29:00.465 #define SPDK_CONFIG_WERROR 1 00:29:00.465 #define SPDK_CONFIG_WPDK_DIR 00:29:00.465 #undef SPDK_CONFIG_XNVME 00:29:00.465 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:00.465 12:48:42 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:00.465 12:48:42 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:00.465 12:48:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.465 12:48:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.465 12:48:42 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.465 12:48:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.465 12:48:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.465 12:48:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.465 12:48:42 -- paths/export.sh@5 -- # export PATH 00:29:00.465 12:48:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.465 12:48:42 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:00.465 12:48:42 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:00.465 12:48:42 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:00.465 12:48:42 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:00.465 12:48:42 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:29:00.465 12:48:42 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:29:00.465 12:48:42 -- pm/common@16 -- # TEST_TAG=N/A 00:29:00.465 12:48:42 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:29:00.465 12:48:42 -- common/autotest_common.sh@52 -- # : 1 00:29:00.465 12:48:42 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:29:00.465 12:48:42 -- common/autotest_common.sh@56 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:00.465 12:48:42 -- common/autotest_common.sh@58 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:29:00.465 12:48:42 -- common/autotest_common.sh@60 -- # : 1 00:29:00.465 12:48:42 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:00.465 12:48:42 -- common/autotest_common.sh@62 -- # : 1 00:29:00.465 12:48:42 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:29:00.465 12:48:42 -- common/autotest_common.sh@64 -- # : 00:29:00.465 12:48:42 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:29:00.465 12:48:42 -- common/autotest_common.sh@66 -- # : 0 00:29:00.465 12:48:42 -- 
common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:29:00.465 12:48:42 -- common/autotest_common.sh@68 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:29:00.465 12:48:42 -- common/autotest_common.sh@70 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:29:00.465 12:48:42 -- common/autotest_common.sh@72 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:00.465 12:48:42 -- common/autotest_common.sh@74 -- # : 1 00:29:00.465 12:48:42 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:29:00.465 12:48:42 -- common/autotest_common.sh@76 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:29:00.465 12:48:42 -- common/autotest_common.sh@78 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:29:00.465 12:48:42 -- common/autotest_common.sh@80 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:29:00.465 12:48:42 -- common/autotest_common.sh@82 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:29:00.465 12:48:42 -- common/autotest_common.sh@84 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:29:00.465 12:48:42 -- common/autotest_common.sh@86 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:29:00.465 12:48:42 -- common/autotest_common.sh@88 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:29:00.465 12:48:42 -- common/autotest_common.sh@90 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:00.465 12:48:42 -- common/autotest_common.sh@92 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:29:00.465 12:48:42 -- common/autotest_common.sh@94 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:29:00.465 12:48:42 -- common/autotest_common.sh@96 -- # : rdma 00:29:00.465 12:48:42 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:00.465 12:48:42 -- common/autotest_common.sh@98 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:29:00.465 12:48:42 -- common/autotest_common.sh@100 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:29:00.465 12:48:42 -- common/autotest_common.sh@102 -- # : 1 00:29:00.465 12:48:42 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:29:00.465 12:48:42 -- common/autotest_common.sh@104 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:29:00.465 12:48:42 -- common/autotest_common.sh@106 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:29:00.465 12:48:42 -- common/autotest_common.sh@108 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:29:00.465 12:48:42 -- common/autotest_common.sh@110 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:29:00.465 12:48:42 -- common/autotest_common.sh@112 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:00.465 12:48:42 -- common/autotest_common.sh@114 -- # : 1 
00:29:00.465 12:48:42 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:29:00.465 12:48:42 -- common/autotest_common.sh@116 -- # : 1 00:29:00.465 12:48:42 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:29:00.465 12:48:42 -- common/autotest_common.sh@118 -- # : 00:29:00.465 12:48:42 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:00.465 12:48:42 -- common/autotest_common.sh@120 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:29:00.465 12:48:42 -- common/autotest_common.sh@122 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:29:00.465 12:48:42 -- common/autotest_common.sh@124 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:29:00.465 12:48:42 -- common/autotest_common.sh@126 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:29:00.465 12:48:42 -- common/autotest_common.sh@128 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:29:00.465 12:48:42 -- common/autotest_common.sh@130 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:29:00.465 12:48:42 -- common/autotest_common.sh@132 -- # : 00:29:00.465 12:48:42 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:29:00.465 12:48:42 -- common/autotest_common.sh@134 -- # : true 00:29:00.465 12:48:42 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:29:00.465 12:48:42 -- common/autotest_common.sh@136 -- # : 1 00:29:00.465 12:48:42 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:29:00.465 12:48:42 -- common/autotest_common.sh@138 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:29:00.465 12:48:42 -- common/autotest_common.sh@140 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:29:00.465 12:48:42 -- common/autotest_common.sh@142 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:29:00.465 12:48:42 -- common/autotest_common.sh@144 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:29:00.465 12:48:42 -- common/autotest_common.sh@146 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:29:00.465 12:48:42 -- common/autotest_common.sh@148 -- # : 00:29:00.465 12:48:42 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:29:00.465 12:48:42 -- common/autotest_common.sh@150 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:29:00.465 12:48:42 -- common/autotest_common.sh@152 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:29:00.465 12:48:42 -- common/autotest_common.sh@154 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:29:00.465 12:48:42 -- common/autotest_common.sh@156 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:29:00.465 12:48:42 -- common/autotest_common.sh@158 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:29:00.465 12:48:42 -- common/autotest_common.sh@160 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:29:00.465 12:48:42 -- common/autotest_common.sh@163 -- # 
: 00:29:00.465 12:48:42 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:29:00.465 12:48:42 -- common/autotest_common.sh@165 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:29:00.465 12:48:42 -- common/autotest_common.sh@167 -- # : 0 00:29:00.465 12:48:42 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:00.725 12:48:42 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:00.725 12:48:42 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:00.725 12:48:42 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:00.725 12:48:42 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:00.725 12:48:42 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:00.725 12:48:42 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:00.725 12:48:42 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:29:00.725 12:48:42 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:00.725 12:48:42 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:00.725 12:48:42 -- common/autotest_common.sh@190 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:00.725 12:48:42 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:00.726 12:48:42 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:29:00.726 12:48:42 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:29:00.726 12:48:42 -- common/autotest_common.sh@196 -- # cat 00:29:00.726 12:48:43 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:29:00.726 12:48:43 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:00.726 12:48:43 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:00.726 12:48:43 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:00.726 12:48:43 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:00.726 12:48:43 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:29:00.726 12:48:43 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:29:00.726 12:48:43 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:00.726 12:48:43 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:00.726 12:48:43 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:00.726 12:48:43 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:00.726 12:48:43 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:29:00.726 12:48:43 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:29:00.726 12:48:43 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:00.726 12:48:43 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:00.726 12:48:43 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:00.726 12:48:43 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:00.726 12:48:43 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:00.726 12:48:43 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:00.726 12:48:43 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:29:00.726 12:48:43 -- common/autotest_common.sh@249 -- # export valgrind= 00:29:00.726 12:48:43 -- common/autotest_common.sh@249 -- # valgrind= 00:29:00.726 12:48:43 -- common/autotest_common.sh@255 -- # uname -s 00:29:00.726 12:48:43 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:29:00.726 12:48:43 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:29:00.726 12:48:43 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:29:00.726 12:48:43 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:29:00.726 12:48:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@265 -- # MAKE=make 00:29:00.726 12:48:43 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:29:00.726 12:48:43 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:29:00.726 12:48:43 -- 
common/autotest_common.sh@282 -- # HUGEMEM=4096 00:29:00.726 12:48:43 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:29:00.726 12:48:43 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:29:00.726 12:48:43 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:29:00.726 12:48:43 -- common/autotest_common.sh@309 -- # [[ -z 133240 ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@309 -- # kill -0 133240 00:29:00.726 12:48:43 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:29:00.726 12:48:43 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:29:00.726 12:48:43 -- common/autotest_common.sh@322 -- # local mount target_dir 00:29:00.726 12:48:43 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:29:00.726 12:48:43 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:29:00.726 12:48:43 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:29:00.726 12:48:43 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:29:00.726 12:48:43 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.L2p0pv 00:29:00.726 12:48:43 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:00.726 12:48:43 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.L2p0pv/tests/interrupt /tmp/spdk.L2p0pv 00:29:00.726 12:48:43 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:29:00.726 12:48:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@318 -- # df -T 00:29:00.726 12:48:43 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:29:00.726 12:48:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:29:00.726 12:48:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=10256482304 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:29:00.726 12:48:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=10343534592 00:29:00.726 12:48:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265810944 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:29:00.726 12:48:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:29:00.726 12:48:43 
-- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:29:00.726 12:48:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:29:00.726 12:48:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:29:00.726 12:48:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:29:00.726 12:48:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:29:00.726 12:48:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:29:00.726 12:48:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:29:00.726 12:48:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=95064817664 00:29:00.726 12:48:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:29:00.726 12:48:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=4637962240 00:29:00.726 12:48:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:29:00.726 12:48:43 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:29:00.726 * Looking for test storage... 
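The trace above shows set_test_storage enumerating every mount point with df -T and recording its device, filesystem type, total size, and free space in associative arrays, keyed by mount point and held in bytes. A minimal sketch of that pattern, as a hypothetical reconstruction (the names follow the trace, but the 1K-block-to-bytes conversion and the simplified structure are assumptions, not the actual autotest_common.sh source):

    # Hypothetical reconstruction of the mount scan traced above.
    probe_mounts() {
        local -A mounts fss sizes avails uses
        local source fs size use avail _ mount
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks
            avails["$mount"]=$((avail * 1024))  # keep everything in bytes
            uses["$mount"]=$((use * 1024))
        done < <(df -T | grep -v Filesystem)
    }

With the table built, the script walks its storage candidates and settles on the first directory whose mount point has at least requested_size bytes free (2 GiB plus slack, 2214592512 bytes here); in this run that is / on /dev/vda1 with roughly 10 GB available, as the target_space check below confirms.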
00:29:00.726 12:48:43 -- common/autotest_common.sh@359 -- # local target_space new_size 00:29:00.726 12:48:43 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:29:00.726 12:48:43 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.726 12:48:43 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:00.726 12:48:43 -- common/autotest_common.sh@363 -- # mount=/ 00:29:00.726 12:48:43 -- common/autotest_common.sh@365 -- # target_space=10256482304 00:29:00.726 12:48:43 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:29:00.726 12:48:43 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:29:00.726 12:48:43 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@372 -- # new_size=12558127104 00:29:00.726 12:48:43 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:00.726 12:48:43 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.726 12:48:43 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.726 12:48:43 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:00.726 12:48:43 -- common/autotest_common.sh@380 -- # return 0 00:29:00.726 12:48:43 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:29:00.726 12:48:43 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:29:00.726 12:48:43 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:00.726 12:48:43 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:00.726 12:48:43 -- common/autotest_common.sh@1672 -- # true 00:29:00.726 12:48:43 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:29:00.726 12:48:43 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:00.726 12:48:43 -- common/autotest_common.sh@27 -- # exec 00:29:00.726 12:48:43 -- common/autotest_common.sh@29 -- # exec 00:29:00.726 12:48:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:00.726 12:48:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:29:00.726 12:48:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:00.726 12:48:43 -- common/autotest_common.sh@18 -- # set -x 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:00.727 12:48:43 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:00.727 12:48:43 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:00.727 12:48:43 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=133281 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:00.727 12:48:43 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 133281 /var/tmp/spdk.sock 00:29:00.727 12:48:43 -- common/autotest_common.sh@819 -- # '[' -z 133281 ']' 00:29:00.727 12:48:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.727 12:48:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:00.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.727 12:48:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.727 12:48:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:00.727 12:48:43 -- common/autotest_common.sh@10 -- # set +x 00:29:00.727 [2024-10-01 12:48:43.142077] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
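Above, start_intr_tgt launches the interrupt_tgt example pinned to cores 0-2 (-m 0x07) with its RPC server on /var/tmp/spdk.sock, and waitforlisten then blocks until that socket is usable (max_retries=100 in the trace). A sketch of the polling idiom, under the simplifying assumption that checking for the socket file is enough; the real helper is more thorough:

    # Hypothetical sketch of the waitforlisten pattern, not the actual helper.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.5
        done
        return 1                                     # gave up waiting
    }

Once the socket answers, the test drives the target over it with rpc_cmd (thread_get_pollers below).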
00:29:00.727 [2024-10-01 12:48:43.142226] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133281 ] 00:29:00.985 [2024-10-01 12:48:43.318319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:01.244 [2024-10-01 12:48:43.565697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.244 [2024-10-01 12:48:43.565886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.244 [2024-10-01 12:48:43.565887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.504 [2024-10-01 12:48:43.934601] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:01.504 12:48:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:01.504 12:48:43 -- common/autotest_common.sh@852 -- # return 0 00:29:01.504 12:48:43 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:29:01.504 12:48:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:01.504 12:48:43 -- common/autotest_common.sh@10 -- # set +x 00:29:01.504 12:48:43 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:29:01.504 12:48:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:01.504 12:48:44 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:29:01.504 "name": "app_thread", 00:29:01.504 "id": 1, 00:29:01.504 "active_pollers": [], 00:29:01.504 "timed_pollers": [ 00:29:01.504 { 00:29:01.504 "name": "rpc_subsystem_poll", 00:29:01.504 "id": 1, 00:29:01.504 "state": "waiting", 00:29:01.504 "run_count": 0, 00:29:01.504 "busy_count": 0, 00:29:01.504 "period_ticks": 9960000 00:29:01.504 } 00:29:01.504 ], 00:29:01.504 "paused_pollers": [] 00:29:01.504 }' 00:29:01.504 12:48:44 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:29:01.763 12:48:44 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:29:01.763 12:48:44 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:29:01.763 12:48:44 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:29:01.763 12:48:44 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:29:01.763 12:48:44 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:29:01.763 12:48:44 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:29:01.763 12:48:44 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:01.763 12:48:44 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:01.763 5000+0 records in 00:29:01.763 5000+0 records out 00:29:01.763 10240000 bytes (10 MB, 9.8 MiB) copied, 0.036789 s, 278 MB/s 00:29:01.763 12:48:44 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:29:02.023 AIO0 00:29:02.023 12:48:44 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:02.282 12:48:44 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:29:02.282 12:48:44 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:29:02.283 12:48:44 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:29:02.283 12:48:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:02.283 12:48:44 -- common/autotest_common.sh@10 -- # set +x 00:29:02.283 12:48:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:02.283 12:48:44 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:29:02.283 "name": "app_thread", 00:29:02.283 "id": 1, 00:29:02.283 "active_pollers": [], 00:29:02.283 "timed_pollers": [ 00:29:02.283 { 00:29:02.283 "name": "rpc_subsystem_poll", 00:29:02.283 "id": 1, 00:29:02.283 "state": "waiting", 00:29:02.283 "run_count": 0, 00:29:02.283 "busy_count": 0, 00:29:02.283 "period_ticks": 9960000 00:29:02.283 } 00:29:02.283 ], 00:29:02.283 "paused_pollers": [] 00:29:02.283 }' 00:29:02.283 12:48:44 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:29:02.283 12:48:44 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:29:02.283 12:48:44 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:29:02.542 12:48:44 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:29:02.542 12:48:44 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:29:02.542 12:48:44 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:29:02.542 12:48:44 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:29:02.542 12:48:44 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 133281 00:29:02.542 12:48:44 -- common/autotest_common.sh@926 -- # '[' -z 133281 ']' 00:29:02.542 12:48:44 -- common/autotest_common.sh@930 -- # kill -0 133281 00:29:02.542 12:48:44 -- common/autotest_common.sh@931 -- # uname 00:29:02.542 12:48:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:02.542 12:48:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133281 00:29:02.542 12:48:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:02.542 killing process with pid 133281 00:29:02.542 12:48:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:02.542 12:48:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133281' 00:29:02.542 12:48:44 -- common/autotest_common.sh@945 -- # kill 133281 00:29:02.542 12:48:44 -- common/autotest_common.sh@950 -- # wait 133281 00:29:03.920 12:48:46 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:29:03.920 12:48:46 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:29:03.920 00:29:03.920 real 0m3.665s 00:29:03.920 user 0m3.115s 00:29:03.920 sys 0m0.723s 00:29:03.920 12:48:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:03.920 ************************************ 00:29:03.920 END TEST reap_unregistered_poller 00:29:03.920 12:48:46 -- common/autotest_common.sh@10 -- # set +x 00:29:03.920 ************************************ 00:29:04.179 12:48:46 -- spdk/autotest.sh@204 -- # uname -s 00:29:04.179 12:48:46 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:29:04.179 12:48:46 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:29:04.179 12:48:46 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:29:04.179 12:48:46 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:29:04.179 12:48:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.179 12:48:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.179 12:48:46 -- 
common/autotest_common.sh@10 -- # set +x 00:29:04.179 ************************************ 00:29:04.179 START TEST spdk_dd 00:29:04.179 ************************************ 00:29:04.179 12:48:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:29:04.179 * Looking for test storage... 00:29:04.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:04.179 12:48:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:04.179 12:48:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.179 12:48:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.179 12:48:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.179 12:48:46 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.179 12:48:46 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.179 12:48:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.179 12:48:46 -- paths/export.sh@5 -- # export PATH 00:29:04.179 12:48:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:04.179 12:48:46 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:04.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:04.748 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:06.657 12:48:48 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:29:06.657 12:48:48 -- dd/dd.sh@11 -- # nvme_in_userspace 00:29:06.657 12:48:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:29:06.657 12:48:48 -- scripts/common.sh@312 -- # local nvmes 00:29:06.657 12:48:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:29:06.657 12:48:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:29:06.657 12:48:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:29:06.657 12:48:48 -- scripts/common.sh@297 -- # local bdf= 00:29:06.657 12:48:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:29:06.657 12:48:48 -- scripts/common.sh@232 -- # local class 00:29:06.657 
12:48:48 -- scripts/common.sh@233 -- # local subclass 00:29:06.657 12:48:48 -- scripts/common.sh@234 -- # local progif 00:29:06.657 12:48:48 -- scripts/common.sh@235 -- # printf %02x 1 00:29:06.657 12:48:48 -- scripts/common.sh@235 -- # class=01 00:29:06.657 12:48:48 -- scripts/common.sh@236 -- # printf %02x 8 00:29:06.657 12:48:48 -- scripts/common.sh@236 -- # subclass=08 00:29:06.657 12:48:48 -- scripts/common.sh@237 -- # printf %02x 2 00:29:06.657 12:48:48 -- scripts/common.sh@237 -- # progif=02 00:29:06.657 12:48:48 -- scripts/common.sh@239 -- # hash lspci 00:29:06.657 12:48:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:29:06.657 12:48:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:29:06.657 12:48:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:29:06.657 12:48:48 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:29:06.657 12:48:48 -- scripts/common.sh@244 -- # tr -d '"' 00:29:06.657 12:48:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:06.657 12:48:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:29:06.657 12:48:48 -- scripts/common.sh@15 -- # local i 00:29:06.657 12:48:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:06.657 12:48:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:06.657 12:48:48 -- scripts/common.sh@24 -- # return 0 00:29:06.657 12:48:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:29:06.657 12:48:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:29:06.657 12:48:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:29:06.657 12:48:48 -- scripts/common.sh@322 -- # uname -s 00:29:06.657 12:48:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:29:06.657 12:48:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:29:06.657 12:48:48 -- scripts/common.sh@327 -- # (( 1 )) 00:29:06.657 12:48:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:29:06.657 12:48:48 -- dd/dd.sh@13 -- # check_liburing 00:29:06.657 12:48:48 -- dd/common.sh@139 -- # local lib so 00:29:06.657 12:48:48 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:29:06.657 12:48:48 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:29:06.657 12:48:48 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.657 12:48:48 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:29:06.657 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.658 12:48:48 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:29:06.658 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.658 12:48:48 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:29:06.658 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.658 12:48:48 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:29:06.658 12:48:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:29:06.658 12:48:48 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:29:06.658 12:48:48 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:29:06.658 12:48:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:06.658 12:48:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.658 12:48:48 -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 ************************************ 00:29:06.658 START TEST spdk_dd_basic_rw 00:29:06.658 ************************************ 00:29:06.658 12:48:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:29:06.658 * Looking for test storage... 
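nvme_in_userspace, traced above, finds NVMe controllers by PCI class code rather than by bound driver: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02, i.e. class string 0108 with a -p02 suffix in lspci -mm output. The pipeline below is taken directly from the trace; only the function wrapper is illustrative, and the real helper additionally applies pci_can_use (PCI_ALLOWED/PCI_BLOCKED) filtering and keeps only BDFs present under /sys/bus/pci/drivers/nvme:

    # Enumerate NVMe BDFs by class/subclass/prog-if, as in the trace above.
    iter_nvme_bdfs() {
        lspci -mm -n -D | grep -i -- -p02 |
            awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    }

Here it yields a single controller, 0000:00:06.0, which basic_rw.sh addresses as Nvme0. The check_liburing step interleaved above simply lists the dynamic objects spdk_dd loads (LD_TRACE_LOADED_OBJECTS=1) and matches each against liburing.so.*; none matches, so liburing_in_use stays 0.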
00:29:06.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:06.658 12:48:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:06.658 12:48:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.658 12:48:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.658 12:48:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.658 12:48:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.658 12:48:49 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.658 12:48:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.658 12:48:49 -- paths/export.sh@5 -- # export PATH 00:29:06.658 12:48:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.658 12:48:49 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:29:06.658 12:48:49 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:29:06.658 12:48:49 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:29:06.658 12:48:49 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:29:06.658 12:48:49 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:29:06.658 12:48:49 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:29:06.658 12:48:49 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:29:06.658 12:48:49 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:06.658 12:48:49 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:06.658 12:48:49 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:29:06.658 12:48:49 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:29:06.658 12:48:49 -- dd/common.sh@126 -- # mapfile -t id 00:29:06.658 12:48:49 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:29:06.920 12:48:49 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2217 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:29:06.920 12:48:49 -- dd/common.sh@130 -- # lbaf=04 00:29:06.920 12:48:49 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2217 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:29:06.920 12:48:49 -- dd/common.sh@132 -- # lbaf=4096 00:29:06.920 12:48:49 -- dd/common.sh@134 -- # echo 4096 00:29:06.920 12:48:49 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:29:06.920 12:48:49 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:06.920 12:48:49 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:29:06.920 12:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.920 12:48:49 -- dd/basic_rw.sh@96 -- # gen_conf 00:29:06.920 12:48:49 -- common/autotest_common.sh@10 -- # set +x 00:29:06.920 12:48:49 -- dd/common.sh@31 -- # xtrace_disable 00:29:06.920 12:48:49 -- common/autotest_common.sh@10 -- # set +x 00:29:06.920 12:48:49 -- dd/basic_rw.sh@96 -- # : 00:29:06.920 ************************************ 
00:29:06.920 START TEST dd_bs_lt_native_bs 00:29:06.920 ************************************ 00:29:06.920 12:48:49 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:06.920 12:48:49 -- common/autotest_common.sh@640 -- # local es=0 00:29:06.920 12:48:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:06.920 12:48:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.920 12:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.920 12:48:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.920 12:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.921 12:48:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.921 12:48:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.921 12:48:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.921 12:48:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:06.921 12:48:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:29:06.921 { 00:29:06.921 "subsystems": [ 00:29:06.921 { 00:29:06.921 "subsystem": "bdev", 00:29:06.921 "config": [ 00:29:06.921 { 00:29:06.921 "params": { 00:29:06.921 "trtype": "pcie", 00:29:06.921 "traddr": "0000:00:06.0", 00:29:06.921 "name": "Nvme0" 00:29:06.921 }, 00:29:06.921 "method": "bdev_nvme_attach_controller" 00:29:06.921 }, 00:29:06.921 { 00:29:06.921 "method": "bdev_wait_for_examine" 00:29:06.921 } 00:29:06.921 ] 00:29:06.921 } 00:29:06.921 ] 00:29:06.921 } 00:29:06.921 [2024-10-01 12:48:49.417261] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
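The two giant [[ ... =~ ]] matches traced above are how dd/common.sh derives the drive's native block size before any I/O runs: the first capture pulls the index of the current LBA format (#04), the second pulls that format's data size (4096 bytes). A minimal standalone sketch of the same two-step capture, assuming id_output already holds the identify-controller dump (the variable name is illustrative, not from the repo):

    # Two-step native block size capture, mirroring the regex matches at
    # dd/common.sh lines 130-132 in the trace above. Keeping each pattern
    # in a variable sidesteps the quoting rules for spaces and '#'
    # inside [[ =~ ]].
    re_current='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id_output =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}      # -> 04
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id_output =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}    # -> 4096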
00:29:06.921 [2024-10-01 12:48:49.417400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133605 ] 00:29:07.180 [2024-10-01 12:48:49.586580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.438 [2024-10-01 12:48:49.799811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.697 [2024-10-01 12:48:50.228376] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:29:07.697 [2024-10-01 12:48:50.228479] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:08.635 [2024-10-01 12:48:51.093867] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:09.203 12:48:51 -- common/autotest_common.sh@643 -- # es=234 00:29:09.203 12:48:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:09.203 12:48:51 -- common/autotest_common.sh@652 -- # es=106 00:29:09.203 12:48:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:09.203 12:48:51 -- common/autotest_common.sh@660 -- # es=1 00:29:09.203 12:48:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:09.203 00:29:09.203 real 0m2.232s 00:29:09.203 user 0m1.867s 00:29:09.203 sys 0m0.322s 00:29:09.203 ************************************ 00:29:09.203 END TEST dd_bs_lt_native_bs 00:29:09.203 ************************************ 00:29:09.203 12:48:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.203 12:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:09.203 12:48:51 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:29:09.203 12:48:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:09.203 12:48:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:09.203 12:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:09.203 ************************************ 00:29:09.203 START TEST dd_rw 00:29:09.203 ************************************ 00:29:09.203 12:48:51 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:29:09.203 12:48:51 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:29:09.203 12:48:51 -- dd/basic_rw.sh@12 -- # local count size 00:29:09.203 12:48:51 -- dd/basic_rw.sh@13 -- # local qds bss 00:29:09.203 12:48:51 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:29:09.203 12:48:51 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:29:09.203 12:48:51 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:29:09.203 12:48:51 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:29:09.204 12:48:51 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:29:09.204 12:48:51 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:29:09.204 12:48:51 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:29:09.204 12:48:51 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:29:09.204 12:48:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:09.204 12:48:51 -- dd/basic_rw.sh@23 -- # count=15 00:29:09.204 12:48:51 -- dd/basic_rw.sh@24 -- # count=15 00:29:09.204 12:48:51 -- dd/basic_rw.sh@25 -- # size=61440 00:29:09.204 12:48:51 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:29:09.204 12:48:51 -- dd/common.sh@98 -- # xtrace_disable 00:29:09.204 12:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:09.775 12:48:52 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
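The dd_bs_lt_native_bs block above passes precisely because spdk_dd refuses the undersized transfer: run_test wraps the command in the NOT helper, so the "*ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size" line and the non-zero exit are the expected outcome, and the es=234 -> es=106 -> es=1 chain afterwards merely normalizes the observed exit status before asserting it is non-zero. A hedged sketch of that inversion (illustrative only; the real helper in autotest_common.sh also does the valid_exec_arg argument checks visible in the trace):

    # Expected-failure wrapper in the spirit of NOT: succeed only if the
    # wrapped command fails. Illustrative, not the autotest_common.sh body.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # failure is the expected result here
    }
    # bs=2048 is below the 4096-byte native block size, so this must fail:
    NOT spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61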
00:29:09.775 12:48:52 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:09.775 12:48:52 -- dd/common.sh@31 -- # xtrace_disable 00:29:09.775 12:48:52 -- common/autotest_common.sh@10 -- # set +x 00:29:09.775 { 00:29:09.775 "subsystems": [ 00:29:09.775 { 00:29:09.775 "subsystem": "bdev", 00:29:09.775 "config": [ 00:29:09.775 { 00:29:09.775 "params": { 00:29:09.775 "trtype": "pcie", 00:29:09.775 "traddr": "0000:00:06.0", 00:29:09.775 "name": "Nvme0" 00:29:09.775 }, 00:29:09.775 "method": "bdev_nvme_attach_controller" 00:29:09.775 }, 00:29:09.775 { 00:29:09.775 "method": "bdev_wait_for_examine" 00:29:09.775 } 00:29:09.775 ] 00:29:09.775 } 00:29:09.775 ] 00:29:09.775 } 00:29:09.775 [2024-10-01 12:48:52.222469] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:09.775 [2024-10-01 12:48:52.222617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133665 ] 00:29:10.033 [2024-10-01 12:48:52.392965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.292 [2024-10-01 12:48:52.626783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.930  Copying: 60/60 [kB] (average 19 MBps) 00:29:11.930 00:29:11.930 12:48:54 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:29:11.930 12:48:54 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:11.930 12:48:54 -- dd/common.sh@31 -- # xtrace_disable 00:29:11.930 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:29:11.930 { 00:29:11.930 "subsystems": [ 00:29:11.930 { 00:29:11.930 "subsystem": "bdev", 00:29:11.930 "config": [ 00:29:11.930 { 00:29:11.930 "params": { 00:29:11.930 "trtype": "pcie", 00:29:11.930 "traddr": "0000:00:06.0", 00:29:11.930 "name": "Nvme0" 00:29:11.930 }, 00:29:11.930 "method": "bdev_nvme_attach_controller" 00:29:11.930 }, 00:29:11.930 { 00:29:11.930 "method": "bdev_wait_for_examine" 00:29:11.930 } 00:29:11.930 ] 00:29:11.930 } 00:29:11.930 ] 00:29:11.930 } 00:29:11.930 [2024-10-01 12:48:54.418954] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:11.930 [2024-10-01 12:48:54.419107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133699 ] 00:29:12.189 [2024-10-01 12:48:54.587330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.447 [2024-10-01 12:48:54.822684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.083  Copying: 60/60 [kB] (average 29 MBps) 00:29:14.083 00:29:14.342 12:48:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:14.342 12:48:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:29:14.342 12:48:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:14.342 12:48:56 -- dd/common.sh@11 -- # local nvme_ref= 00:29:14.342 12:48:56 -- dd/common.sh@12 -- # local size=61440 00:29:14.342 12:48:56 -- dd/common.sh@14 -- # local bs=1048576 00:29:14.342 12:48:56 -- dd/common.sh@15 -- # local count=1 00:29:14.342 12:48:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:14.342 12:48:56 -- dd/common.sh@18 -- # gen_conf 00:29:14.342 12:48:56 -- dd/common.sh@31 -- # xtrace_disable 00:29:14.342 12:48:56 -- common/autotest_common.sh@10 -- # set +x 00:29:14.342 { 00:29:14.342 "subsystems": [ 00:29:14.342 { 00:29:14.342 "subsystem": "bdev", 00:29:14.342 "config": [ 00:29:14.342 { 00:29:14.342 "params": { 00:29:14.342 "trtype": "pcie", 00:29:14.342 "traddr": "0000:00:06.0", 00:29:14.342 "name": "Nvme0" 00:29:14.342 }, 00:29:14.342 "method": "bdev_nvme_attach_controller" 00:29:14.342 }, 00:29:14.342 { 00:29:14.342 "method": "bdev_wait_for_examine" 00:29:14.342 } 00:29:14.342 ] 00:29:14.342 } 00:29:14.342 ] 00:29:14.342 } 00:29:14.342 [2024-10-01 12:48:56.713650] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
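Everything from here to the END TEST dd_rw banner is the same four-step round trip, re-run for each block-size/queue-depth pair: write the generated pattern to the bdev, read it back at the same bs/qd, byte-compare the two dump files, then zero the namespace so the next pass starts clean. Condensed from the commands above (paths shortened; gen_conf emits the bdev JSON shown throughout):

    # One dd_rw pass at a given bs/qd, condensed from the log above.
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
    diff -q dd.dump0 dd.dump1        # fails the test on any data mismatch
    # clear_nvme: overwrite the first 1 MiB with zeros between passes
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)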
00:29:14.342 [2024-10-01 12:48:56.713802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133732 ] 00:29:14.601 [2024-10-01 12:48:56.883587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.601 [2024-10-01 12:48:57.125554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.544  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:16.544 00:29:16.544 12:48:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:16.544 12:48:58 -- dd/basic_rw.sh@23 -- # count=15 00:29:16.544 12:48:58 -- dd/basic_rw.sh@24 -- # count=15 00:29:16.544 12:48:58 -- dd/basic_rw.sh@25 -- # size=61440 00:29:16.544 12:48:58 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:29:16.544 12:48:58 -- dd/common.sh@98 -- # xtrace_disable 00:29:16.544 12:48:58 -- common/autotest_common.sh@10 -- # set +x 00:29:16.801 12:48:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:29:16.801 12:48:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:16.801 12:48:59 -- dd/common.sh@31 -- # xtrace_disable 00:29:16.801 12:48:59 -- common/autotest_common.sh@10 -- # set +x 00:29:17.059 { 00:29:17.059 "subsystems": [ 00:29:17.059 { 00:29:17.059 "subsystem": "bdev", 00:29:17.059 "config": [ 00:29:17.059 { 00:29:17.059 "params": { 00:29:17.059 "trtype": "pcie", 00:29:17.059 "traddr": "0000:00:06.0", 00:29:17.059 "name": "Nvme0" 00:29:17.059 }, 00:29:17.059 "method": "bdev_nvme_attach_controller" 00:29:17.059 }, 00:29:17.059 { 00:29:17.059 "method": "bdev_wait_for_examine" 00:29:17.059 } 00:29:17.059 ] 00:29:17.059 } 00:29:17.059 ] 00:29:17.059 } 00:29:17.059 [2024-10-01 12:48:59.393495] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:17.059 [2024-10-01 12:48:59.393657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133773 ] 00:29:17.059 [2024-10-01 12:48:59.565263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.316 [2024-10-01 12:48:59.801005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.259  Copying: 60/60 [kB] (average 58 MBps) 00:29:19.259 00:29:19.259 12:49:01 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:29:19.259 12:49:01 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:19.259 12:49:01 -- dd/common.sh@31 -- # xtrace_disable 00:29:19.259 12:49:01 -- common/autotest_common.sh@10 -- # set +x 00:29:19.259 { 00:29:19.259 "subsystems": [ 00:29:19.259 { 00:29:19.259 "subsystem": "bdev", 00:29:19.259 "config": [ 00:29:19.259 { 00:29:19.259 "params": { 00:29:19.259 "trtype": "pcie", 00:29:19.259 "traddr": "0000:00:06.0", 00:29:19.259 "name": "Nvme0" 00:29:19.259 }, 00:29:19.259 "method": "bdev_nvme_attach_controller" 00:29:19.259 }, 00:29:19.259 { 00:29:19.259 "method": "bdev_wait_for_examine" 00:29:19.259 } 00:29:19.259 ] 00:29:19.259 } 00:29:19.259 ] 00:29:19.259 } 00:29:19.259 [2024-10-01 12:49:01.691490] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:19.259 [2024-10-01 12:49:01.691693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133805 ] 00:29:19.518 [2024-10-01 12:49:01.862204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.778 [2024-10-01 12:49:02.086823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.416  Copying: 60/60 [kB] (average 58 MBps) 00:29:21.416 00:29:21.416 12:49:03 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:21.416 12:49:03 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:29:21.416 12:49:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:21.416 12:49:03 -- dd/common.sh@11 -- # local nvme_ref= 00:29:21.416 12:49:03 -- dd/common.sh@12 -- # local size=61440 00:29:21.416 12:49:03 -- dd/common.sh@14 -- # local bs=1048576 00:29:21.416 12:49:03 -- dd/common.sh@15 -- # local count=1 00:29:21.416 12:49:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:21.416 12:49:03 -- dd/common.sh@18 -- # gen_conf 00:29:21.416 12:49:03 -- dd/common.sh@31 -- # xtrace_disable 00:29:21.416 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:29:21.416 { 00:29:21.416 "subsystems": [ 00:29:21.416 { 00:29:21.416 "subsystem": "bdev", 00:29:21.416 "config": [ 00:29:21.416 { 00:29:21.416 "params": { 00:29:21.416 "trtype": "pcie", 00:29:21.416 "traddr": "0000:00:06.0", 00:29:21.416 "name": "Nvme0" 00:29:21.416 }, 00:29:21.416 "method": "bdev_nvme_attach_controller" 00:29:21.416 }, 00:29:21.416 { 00:29:21.416 "method": "bdev_wait_for_examine" 00:29:21.416 } 00:29:21.416 ] 00:29:21.416 } 00:29:21.416 ] 00:29:21.416 } 00:29:21.416 [2024-10-01 12:49:03.886019] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
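Every spdk_dd invocation here receives the same small JSON config over an anonymous pipe (/dev/fd/62): it attaches the QEMU NVMe controller at PCI address 0000:00:06.0 as bdev Nvme0n1 via bdev_nvme_attach_controller, then blocks in bdev_wait_for_examine until the bdev layer finishes probing, which is why every run prints the identical subsystem block. Since --json just takes a readable path, the same config should work from a plain file as well (the file name nvme.json is illustrative):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

    spdk_dd --json nvme.json --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1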
00:29:21.416 [2024-10-01 12:49:03.886311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133838 ] 00:29:21.675 [2024-10-01 12:49:04.056976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.934 [2024-10-01 12:49:04.291077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.572  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:23.572 00:29:23.832 12:49:06 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:29:23.832 12:49:06 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:23.832 12:49:06 -- dd/basic_rw.sh@23 -- # count=7 00:29:23.832 12:49:06 -- dd/basic_rw.sh@24 -- # count=7 00:29:23.832 12:49:06 -- dd/basic_rw.sh@25 -- # size=57344 00:29:23.832 12:49:06 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:29:23.832 12:49:06 -- dd/common.sh@98 -- # xtrace_disable 00:29:23.832 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:29:24.090 12:49:06 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:29:24.091 12:49:06 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:24.091 12:49:06 -- dd/common.sh@31 -- # xtrace_disable 00:29:24.091 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:29:24.091 { 00:29:24.091 "subsystems": [ 00:29:24.091 { 00:29:24.091 "subsystem": "bdev", 00:29:24.091 "config": [ 00:29:24.091 { 00:29:24.091 "params": { 00:29:24.091 "trtype": "pcie", 00:29:24.091 "traddr": "0000:00:06.0", 00:29:24.091 "name": "Nvme0" 00:29:24.091 }, 00:29:24.091 "method": "bdev_nvme_attach_controller" 00:29:24.091 }, 00:29:24.091 { 00:29:24.091 "method": "bdev_wait_for_examine" 00:29:24.091 } 00:29:24.091 ] 00:29:24.091 } 00:29:24.091 ] 00:29:24.091 } 00:29:24.091 [2024-10-01 12:49:06.622809] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
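The size arithmetic ties the whole matrix together: the bss loop traced at the start of dd_rw doubles the 4096-byte native size twice, and count shrinks as bs grows so each pass moves a comparable amount of data, which is exactly what the Copying lines report:

    # Restating the loop from basic_rw.sh (traced above) with its results.
    native_bs=4096; qds=(1 64); bss=()
    for bs in {0..2}; do bss+=( $((native_bs << bs)) ); done
    echo "${bss[@]}"    # 4096 8192 16384
    # count x bs per pass, matching the Copying lines:
    #   15 x  4096 = 61440 bytes  -> "Copying: 60/60 [kB]"
    #    7 x  8192 = 57344 bytes  -> "Copying: 56/56 [kB]"
    #    3 x 16384 = 49152 bytes  -> "Copying: 48/48 [kB]"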
00:29:24.091 [2024-10-01 12:49:06.622972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133877 ] 00:29:24.350 [2024-10-01 12:49:06.792845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.611 [2024-10-01 12:49:07.023267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.349  Copying: 56/56 [kB] (average 27 MBps) 00:29:26.349 00:29:26.349 12:49:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:29:26.349 12:49:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:26.349 12:49:08 -- dd/common.sh@31 -- # xtrace_disable 00:29:26.349 12:49:08 -- common/autotest_common.sh@10 -- # set +x 00:29:26.349 { 00:29:26.349 "subsystems": [ 00:29:26.349 { 00:29:26.349 "subsystem": "bdev", 00:29:26.349 "config": [ 00:29:26.349 { 00:29:26.349 "params": { 00:29:26.349 "trtype": "pcie", 00:29:26.349 "traddr": "0000:00:06.0", 00:29:26.349 "name": "Nvme0" 00:29:26.349 }, 00:29:26.349 "method": "bdev_nvme_attach_controller" 00:29:26.349 }, 00:29:26.349 { 00:29:26.349 "method": "bdev_wait_for_examine" 00:29:26.349 } 00:29:26.349 ] 00:29:26.349 } 00:29:26.349 ] 00:29:26.349 } 00:29:26.349 [2024-10-01 12:49:08.814358] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:26.349 [2024-10-01 12:49:08.814506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133911 ] 00:29:26.609 [2024-10-01 12:49:08.983852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.870 [2024-10-01 12:49:09.212237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.508  Copying: 56/56 [kB] (average 27 MBps) 00:29:28.508 00:29:28.508 12:49:11 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:28.508 12:49:11 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:29:28.508 12:49:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:28.508 12:49:11 -- dd/common.sh@11 -- # local nvme_ref= 00:29:28.508 12:49:11 -- dd/common.sh@12 -- # local size=57344 00:29:28.508 12:49:11 -- dd/common.sh@14 -- # local bs=1048576 00:29:28.508 12:49:11 -- dd/common.sh@15 -- # local count=1 00:29:28.508 12:49:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:28.508 12:49:11 -- dd/common.sh@18 -- # gen_conf 00:29:28.508 12:49:11 -- dd/common.sh@31 -- # xtrace_disable 00:29:28.508 12:49:11 -- common/autotest_common.sh@10 -- # set +x 00:29:28.767 { 00:29:28.767 "subsystems": [ 00:29:28.767 { 00:29:28.767 "subsystem": "bdev", 00:29:28.767 "config": [ 00:29:28.767 { 00:29:28.767 "params": { 00:29:28.767 "trtype": "pcie", 00:29:28.767 "traddr": "0000:00:06.0", 00:29:28.767 "name": "Nvme0" 00:29:28.767 }, 00:29:28.767 "method": "bdev_nvme_attach_controller" 00:29:28.767 }, 00:29:28.767 { 00:29:28.767 "method": "bdev_wait_for_examine" 00:29:28.767 } 00:29:28.767 ] 00:29:28.767 } 00:29:28.767 ] 00:29:28.767 } 00:29:28.767 [2024-10-01 12:49:11.099765] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:29:28.767 [2024-10-01 12:49:11.099931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133944 ] 00:29:28.767 [2024-10-01 12:49:11.270315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.027 [2024-10-01 12:49:11.506998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.978  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:30.978 00:29:30.978 12:49:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:30.978 12:49:13 -- dd/basic_rw.sh@23 -- # count=7 00:29:30.978 12:49:13 -- dd/basic_rw.sh@24 -- # count=7 00:29:30.978 12:49:13 -- dd/basic_rw.sh@25 -- # size=57344 00:29:30.978 12:49:13 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:29:30.978 12:49:13 -- dd/common.sh@98 -- # xtrace_disable 00:29:30.978 12:49:13 -- common/autotest_common.sh@10 -- # set +x 00:29:31.237 12:49:13 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:29:31.237 12:49:13 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:31.237 12:49:13 -- dd/common.sh@31 -- # xtrace_disable 00:29:31.237 12:49:13 -- common/autotest_common.sh@10 -- # set +x 00:29:31.237 { 00:29:31.237 "subsystems": [ 00:29:31.237 { 00:29:31.237 "subsystem": "bdev", 00:29:31.237 "config": [ 00:29:31.237 { 00:29:31.237 "params": { 00:29:31.237 "trtype": "pcie", 00:29:31.237 "traddr": "0000:00:06.0", 00:29:31.237 "name": "Nvme0" 00:29:31.237 }, 00:29:31.237 "method": "bdev_nvme_attach_controller" 00:29:31.237 }, 00:29:31.237 { 00:29:31.237 "method": "bdev_wait_for_examine" 00:29:31.237 } 00:29:31.237 ] 00:29:31.237 } 00:29:31.237 ] 00:29:31.237 } 00:29:31.237 [2024-10-01 12:49:13.764921] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:31.237 [2024-10-01 12:49:13.765091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133983 ] 00:29:31.496 [2024-10-01 12:49:13.936814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.755 [2024-10-01 12:49:14.172276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.702  Copying: 56/56 [kB] (average 54 MBps) 00:29:33.702 00:29:33.702 12:49:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:29:33.702 12:49:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:33.702 12:49:16 -- dd/common.sh@31 -- # xtrace_disable 00:29:33.702 12:49:16 -- common/autotest_common.sh@10 -- # set +x 00:29:33.702 { 00:29:33.702 "subsystems": [ 00:29:33.702 { 00:29:33.702 "subsystem": "bdev", 00:29:33.702 "config": [ 00:29:33.702 { 00:29:33.702 "params": { 00:29:33.702 "trtype": "pcie", 00:29:33.702 "traddr": "0000:00:06.0", 00:29:33.702 "name": "Nvme0" 00:29:33.702 }, 00:29:33.702 "method": "bdev_nvme_attach_controller" 00:29:33.702 }, 00:29:33.702 { 00:29:33.702 "method": "bdev_wait_for_examine" 00:29:33.702 } 00:29:33.702 ] 00:29:33.702 } 00:29:33.702 ] 00:29:33.702 } 00:29:33.702 [2024-10-01 12:49:16.085556] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:33.702 [2024-10-01 12:49:16.085736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134015 ] 00:29:34.015 [2024-10-01 12:49:16.259254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.285 [2024-10-01 12:49:16.501980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.921  Copying: 56/56 [kB] (average 54 MBps) 00:29:35.921 00:29:35.921 12:49:18 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:35.921 12:49:18 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:29:35.921 12:49:18 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:35.921 12:49:18 -- dd/common.sh@11 -- # local nvme_ref= 00:29:35.921 12:49:18 -- dd/common.sh@12 -- # local size=57344 00:29:35.921 12:49:18 -- dd/common.sh@14 -- # local bs=1048576 00:29:35.921 12:49:18 -- dd/common.sh@15 -- # local count=1 00:29:35.921 12:49:18 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:35.921 12:49:18 -- dd/common.sh@18 -- # gen_conf 00:29:35.921 12:49:18 -- dd/common.sh@31 -- # xtrace_disable 00:29:35.921 12:49:18 -- common/autotest_common.sh@10 -- # set +x 00:29:35.921 { 00:29:35.921 "subsystems": [ 00:29:35.921 { 00:29:35.921 "subsystem": "bdev", 00:29:35.921 "config": [ 00:29:35.921 { 00:29:35.921 "params": { 00:29:35.921 "trtype": "pcie", 00:29:35.921 "traddr": "0000:00:06.0", 00:29:35.921 "name": "Nvme0" 00:29:35.921 }, 00:29:35.921 "method": "bdev_nvme_attach_controller" 00:29:35.921 }, 00:29:35.921 { 00:29:35.921 "method": "bdev_wait_for_examine" 00:29:35.921 } 00:29:35.921 ] 00:29:35.921 } 00:29:35.921 ] 00:29:35.921 } 00:29:35.921 [2024-10-01 12:49:18.336714] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:29:35.921 [2024-10-01 12:49:18.336945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134049 ] 00:29:36.181 [2024-10-01 12:49:18.507179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.440 [2024-10-01 12:49:18.751126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.074  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:38.074 00:29:38.074 12:49:20 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:29:38.074 12:49:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:38.074 12:49:20 -- dd/basic_rw.sh@23 -- # count=3 00:29:38.074 12:49:20 -- dd/basic_rw.sh@24 -- # count=3 00:29:38.074 12:49:20 -- dd/basic_rw.sh@25 -- # size=49152 00:29:38.074 12:49:20 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:29:38.074 12:49:20 -- dd/common.sh@98 -- # xtrace_disable 00:29:38.074 12:49:20 -- common/autotest_common.sh@10 -- # set +x 00:29:38.641 12:49:20 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:29:38.641 12:49:20 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:38.641 12:49:20 -- dd/common.sh@31 -- # xtrace_disable 00:29:38.641 12:49:20 -- common/autotest_common.sh@10 -- # set +x 00:29:38.641 { 00:29:38.641 "subsystems": [ 00:29:38.641 { 00:29:38.641 "subsystem": "bdev", 00:29:38.641 "config": [ 00:29:38.641 { 00:29:38.641 "params": { 00:29:38.641 "trtype": "pcie", 00:29:38.641 "traddr": "0000:00:06.0", 00:29:38.641 "name": "Nvme0" 00:29:38.641 }, 00:29:38.641 "method": "bdev_nvme_attach_controller" 00:29:38.641 }, 00:29:38.641 { 00:29:38.641 "method": "bdev_wait_for_examine" 00:29:38.641 } 00:29:38.641 ] 00:29:38.641 } 00:29:38.641 ] 00:29:38.641 } 00:29:38.641 [2024-10-01 12:49:21.026335] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
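gen_bytes produces the pattern file (dd.dump0) that each pass writes; judging from the dd_rw_offset dump further down, its output is plain lowercase alphanumeric noise. A stand-in with the same observable behavior, offered purely as an assumption (the real helper lives in dd/common.sh and its body is not shown in this log):

    # Illustrative stand-in for gen_bytes, NOT the dd/common.sh
    # implementation: emit n random lowercase-alphanumeric bytes, as the
    # captured pattern data suggests.
    gen_bytes() {
        local n=$1
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$n"
    }
    gen_bytes 49152 > dd.dump0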
00:29:38.641 [2024-10-01 12:49:21.026495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134089 ] 00:29:38.901 [2024-10-01 12:49:21.195446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.160 [2024-10-01 12:49:21.441523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.797  Copying: 48/48 [kB] (average 46 MBps) 00:29:40.797 00:29:40.797 12:49:23 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:29:40.797 12:49:23 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:40.797 12:49:23 -- dd/common.sh@31 -- # xtrace_disable 00:29:40.797 12:49:23 -- common/autotest_common.sh@10 -- # set +x 00:29:40.797 { 00:29:40.797 "subsystems": [ 00:29:40.797 { 00:29:40.797 "subsystem": "bdev", 00:29:40.797 "config": [ 00:29:40.797 { 00:29:40.797 "params": { 00:29:40.797 "trtype": "pcie", 00:29:40.797 "traddr": "0000:00:06.0", 00:29:40.797 "name": "Nvme0" 00:29:40.797 }, 00:29:40.797 "method": "bdev_nvme_attach_controller" 00:29:40.797 }, 00:29:40.797 { 00:29:40.797 "method": "bdev_wait_for_examine" 00:29:40.797 } 00:29:40.797 ] 00:29:40.797 } 00:29:40.797 ] 00:29:40.797 } 00:29:40.797 [2024-10-01 12:49:23.236415] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:40.797 [2024-10-01 12:49:23.236585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134120 ] 00:29:41.055 [2024-10-01 12:49:23.406644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.314 [2024-10-01 12:49:23.655451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.998  Copying: 48/48 [kB] (average 46 MBps) 00:29:42.998 00:29:42.998 12:49:25 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:42.998 12:49:25 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:29:42.998 12:49:25 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:42.998 12:49:25 -- dd/common.sh@11 -- # local nvme_ref= 00:29:42.998 12:49:25 -- dd/common.sh@12 -- # local size=49152 00:29:42.998 12:49:25 -- dd/common.sh@14 -- # local bs=1048576 00:29:42.998 12:49:25 -- dd/common.sh@15 -- # local count=1 00:29:42.998 12:49:25 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:42.998 12:49:25 -- dd/common.sh@18 -- # gen_conf 00:29:42.998 12:49:25 -- dd/common.sh@31 -- # xtrace_disable 00:29:42.998 12:49:25 -- common/autotest_common.sh@10 -- # set +x 00:29:43.256 { 00:29:43.256 "subsystems": [ 00:29:43.256 { 00:29:43.256 "subsystem": "bdev", 00:29:43.256 "config": [ 00:29:43.256 { 00:29:43.256 "params": { 00:29:43.256 "trtype": "pcie", 00:29:43.256 "traddr": "0000:00:06.0", 00:29:43.256 "name": "Nvme0" 00:29:43.256 }, 00:29:43.256 "method": "bdev_nvme_attach_controller" 00:29:43.256 }, 00:29:43.256 { 00:29:43.256 "method": "bdev_wait_for_examine" 00:29:43.256 } 00:29:43.256 ] 00:29:43.256 } 00:29:43.256 ] 00:29:43.256 } 00:29:43.256 [2024-10-01 12:49:25.581760] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:29:43.256 [2024-10-01 12:49:25.581986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134149 ] 00:29:43.256 [2024-10-01 12:49:25.766954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.515 [2024-10-01 12:49:26.014953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.459  Copying: 1024/1024 [kB] (average 1000 MBps) 00:29:45.459 00:29:45.459 12:49:27 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:29:45.459 12:49:27 -- dd/basic_rw.sh@23 -- # count=3 00:29:45.459 12:49:27 -- dd/basic_rw.sh@24 -- # count=3 00:29:45.459 12:49:27 -- dd/basic_rw.sh@25 -- # size=49152 00:29:45.459 12:49:27 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:29:45.459 12:49:27 -- dd/common.sh@98 -- # xtrace_disable 00:29:45.459 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:29:45.718 12:49:28 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:29:45.718 12:49:28 -- dd/basic_rw.sh@30 -- # gen_conf 00:29:45.718 12:49:28 -- dd/common.sh@31 -- # xtrace_disable 00:29:45.718 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:29:45.718 { 00:29:45.718 "subsystems": [ 00:29:45.718 { 00:29:45.718 "subsystem": "bdev", 00:29:45.718 "config": [ 00:29:45.718 { 00:29:45.718 "params": { 00:29:45.718 "trtype": "pcie", 00:29:45.718 "traddr": "0000:00:06.0", 00:29:45.718 "name": "Nvme0" 00:29:45.718 }, 00:29:45.718 "method": "bdev_nvme_attach_controller" 00:29:45.718 }, 00:29:45.718 { 00:29:45.718 "method": "bdev_wait_for_examine" 00:29:45.718 } 00:29:45.718 ] 00:29:45.718 } 00:29:45.718 ] 00:29:45.718 } 00:29:45.718 [2024-10-01 12:49:28.183765] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:45.718 [2024-10-01 12:49:28.183955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134195 ] 00:29:45.976 [2024-10-01 12:49:28.356108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.234 [2024-10-01 12:49:28.600698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.181  Copying: 48/48 [kB] (average 46 MBps) 00:29:48.181 00:29:48.181 12:49:30 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:29:48.181 12:49:30 -- dd/basic_rw.sh@37 -- # gen_conf 00:29:48.181 12:49:30 -- dd/common.sh@31 -- # xtrace_disable 00:29:48.181 12:49:30 -- common/autotest_common.sh@10 -- # set +x 00:29:48.181 { 00:29:48.181 "subsystems": [ 00:29:48.181 { 00:29:48.181 "subsystem": "bdev", 00:29:48.181 "config": [ 00:29:48.181 { 00:29:48.181 "params": { 00:29:48.181 "trtype": "pcie", 00:29:48.181 "traddr": "0000:00:06.0", 00:29:48.181 "name": "Nvme0" 00:29:48.181 }, 00:29:48.181 "method": "bdev_nvme_attach_controller" 00:29:48.181 }, 00:29:48.181 { 00:29:48.181 "method": "bdev_wait_for_examine" 00:29:48.181 } 00:29:48.181 ] 00:29:48.181 } 00:29:48.181 ] 00:29:48.181 } 00:29:48.181 [2024-10-01 12:49:30.504533] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:48.181 [2024-10-01 12:49:30.504797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134223 ] 00:29:48.181 [2024-10-01 12:49:30.675782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.440 [2024-10-01 12:49:30.926649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.388  Copying: 48/48 [kB] (average 46 MBps) 00:29:50.388 00:29:50.388 12:49:32 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:50.388 12:49:32 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:29:50.388 12:49:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:50.388 12:49:32 -- dd/common.sh@11 -- # local nvme_ref= 00:29:50.388 12:49:32 -- dd/common.sh@12 -- # local size=49152 00:29:50.388 12:49:32 -- dd/common.sh@14 -- # local bs=1048576 00:29:50.388 12:49:32 -- dd/common.sh@15 -- # local count=1 00:29:50.388 12:49:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:29:50.388 12:49:32 -- dd/common.sh@18 -- # gen_conf 00:29:50.388 12:49:32 -- dd/common.sh@31 -- # xtrace_disable 00:29:50.388 12:49:32 -- common/autotest_common.sh@10 -- # set +x 00:29:50.388 { 00:29:50.388 "subsystems": [ 00:29:50.388 { 00:29:50.388 "subsystem": "bdev", 00:29:50.388 "config": [ 00:29:50.388 { 00:29:50.388 "params": { 00:29:50.388 "trtype": "pcie", 00:29:50.388 "traddr": "0000:00:06.0", 00:29:50.388 "name": "Nvme0" 00:29:50.388 }, 00:29:50.388 "method": "bdev_nvme_attach_controller" 00:29:50.388 }, 00:29:50.388 { 00:29:50.388 "method": "bdev_wait_for_examine" 00:29:50.388 } 00:29:50.388 ] 00:29:50.388 } 00:29:50.388 ] 00:29:50.388 } 00:29:50.388 [2024-10-01 12:49:32.725884] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:29:50.388 [2024-10-01 12:49:32.726060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134256 ] 00:29:50.388 [2024-10-01 12:49:32.897341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.648 [2024-10-01 12:49:33.143315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.594  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:52.594 00:29:52.594 ************************************ 00:29:52.594 END TEST dd_rw 00:29:52.594 ************************************ 00:29:52.594 00:29:52.594 real 0m43.351s 00:29:52.594 user 0m35.872s 00:29:52.594 sys 0m6.213s 00:29:52.594 12:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.594 12:49:34 -- common/autotest_common.sh@10 -- # set +x 00:29:52.594 12:49:35 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:29:52.594 12:49:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:52.594 12:49:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:52.594 12:49:35 -- common/autotest_common.sh@10 -- # set +x 00:29:52.594 ************************************ 00:29:52.594 START TEST dd_rw_offset 00:29:52.594 ************************************ 00:29:52.594 12:49:35 -- common/autotest_common.sh@1104 -- # basic_offset 00:29:52.594 12:49:35 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:29:52.594 12:49:35 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:29:52.594 12:49:35 -- dd/common.sh@98 -- # xtrace_disable 00:29:52.594 12:49:35 -- common/autotest_common.sh@10 -- # set +x 00:29:52.854 12:49:35 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:29:52.855 12:49:35 -- dd/basic_rw.sh@56 -- # 
data=0u6mqhc5jzzaem9yv3eqshe69ndx8ljwv6s5yvudu6tbxshqsn7nn8mmp262fd23zgv7qztiggljp7677e7kexkkkhq4c7hkfp4ekofhm72gw62if0psu7dcrp0vno5zgxz7hfnghhedk1posxmmjb8tcrb4h83b6b9l1k4r9obirnr3mrx37c2b1q11n85rz46vyq5bo22ls5d86im7jp82ynusdd5aa9bk227yw6to68iftwbomi69fip4iejni01c2l9y8ddwrsmlhjltmgjju5kem0fothql2ilkeoelt4m5r3fo2fxxkwyec4f0wy70a9rau6wq4o511n50thmzmtir8fgke1zy6f5t58xjfvm5zcg6f11pah4megj87headmsljv8gg7ncris7cfh8559fse5amgm5uonjg2mnkfe847p4ibqh3f4uaav9lwrjavs3ovh0ciwv4g875r2ut7n6pzfpx8qcp9f317vai9569pt23recdqgvgyc38ix93txqk7esuzevkqrqqv6w647wtvwa49aql3yc2ci5v437uc8d9py0bnmskjl16z8roypbk6qjm9ve86zdj2m1chqpj9z381463avdmyypymp2so9br2mfzwzp8r0ug9tfdnwd66vn2ice0yi084m0svp776w7xzn6cr3nyjb615r29fl4pp6l0aawcy36gebj3zuojssgzkn8iwvx8up8bza03ajgxkc08kndfgwpuu5cf7cka04jvq588e37oz8sa0alefld103twlnx85h888niet2805s1b0kwd6806c1guexkh1f38ochx32yauravl92scrnzby4ekporpaxmwz1hfy32ba51z7jeucteykv4ozw75lh3s2b2d3amb7uvpvw6fa3px77ccsqgy9yebpbz4bf3h8tkomofouz1p32ye4squq0q1k8tg6xvui4gwo7gsuqtsl4og2ia0gv5iuz67vww40iaoxq5ud34yi7k39hjvm4871bofq5dunc8lkz8lboxc7dfk1n6y5ii58ic08nxz1gknubrh0mxwqecmrailetkf53u6j8y2ina1tn2vs9f6v6xtvbw481pr2o1w2u08k14jnj9qvg71mfwjziiag8939t2ag3qfdf5q7e223ulmuncghi4msm6tier0m3tq1bubpa8nw3sum8al5q43gthgfyzoluwr1knt0or4gbhtfss1rs5is1govf5lbp5j2rhmh15fft8iub1cu5a2si9w0sk4arc1fbv8x0hlm34orokomwdtbt2rubnvq7ljb9xky2i9on3xihlvq0ix3h9n00hmsu6cceyprgci7afh81nh3jad1qeachw7f4z9w89bggze8ok3n027n4qbvsmai79423eu66ikaapa38wycz2g1he367d1sld0lihzbygf789a9evtywrt6s4iqazqe1y65al7zftt7s8wr571yfn3uu538pblktq4v5lblm66jgokihetf5slmsq51k1tw1ogjh419lhhzrhj8wm3xj2er8vl2bnw3xhzq35bv1vwf6qg5j6iga9hjipiypkac344k8s07txlylhgyctw3gstlgr42etbs84r8pdhq00p2yn0ot57rti3aepv2o67vghfvjhazsg9sm814tvg8vo1yvfe4l4sdgchf45w9j0re42w24bh1ckddvfhf55x89mzmb7e69ps4r4468oc9jf6n6bgnibbkdvgs1p99tb9nw9jhvv08ddje2ry45a2mda2gztzzagx9qm6r3cajcv7gn6ui7svzikwne85h5v68xkdj8deo7tmeubuwnb3xetulmqcwh98q05jvfinqct56194mr6ea9w1ea14arbtwjygzqp65vpwiknc1epr8fmv7lhcgv35s9f5294bfc2q3pnqmsgh38y3pr8os77mdkm9n1a63pmk8jfdgl4ttmhslk8k2mmfnhrypyzgoso5z5u2fqlze1hhqv98g9jjqaxklocjtdy8596qfu1a8w6erf59e6l80to8xsca9b7dxz7ussp2qe0i8git54mkn1ppv5s6cu0nl7aaskdhbqjcq5m1xu2nzehscnbnq9ovp631e0eaxl01kjazvo4f3m8out6zs2ntfdrq773o14cama0pt4zzsy21f0z4psj27yhwcr2vg7u675616i7qfn8lfc83v5b9k4tymvowc91yewfp0ddl7zos9ab4ts1ro53tv9wrbxt3rkcua3aln6sw8q19si6vh9r4y18nm9gn25amfigxazd5d12qc4rna86uodhq4va103dhhfz92ta59s2sajx9oze6w5hywbvwyyzdpp5kmqz8k7snxfbp3jyz9satyvcvkomaqtlayx6mx7k08yh6zpwt4t7fezt0kj4htu64x3h108y7tpwtn138h0xpzsoda6d0v6p9opqkmflivd3myfbxr0r98ilh2j2xsmk9d45i882jpjdbsnr21fm1iqdkvsrevafqfmm8dbcijo28sovf9r6jgdy8wg9hn9cv8zqb3nq6kcvm1y5zv3vuwtsrbqsyfy8tf2oqrj55omqyxhi9qa5pnoxqps78qj6f9v5eqk9fz4k1nv7qarck65r18rrkieffe6uobz3bls79u0hf2pth91gawclgxq2lufe09ao9buf1rsp9bie28mldc252erfg6lc4vdy53zb1o60c1rwq3ng8vpd2ewfqj6p7eyxft585mhqsv32uaigaicttss0tobj2uxlifrgjhpxs0y09jpt7c6997i4j7cpahm7e3619as3dkpvue7j15gtebzchltwp9amlu4m3ryvp5n4wn4kwnp956p998fifhvp5ma4fol9050fjv825rqmb0wwnp0p75ykkxscgfo8cbhz498e6w78zlgg9osv2x3qn3e5w0gs7uintj1ki14zjxwm1d4czc45oh0u4clsgunsfc105izwhowf85lqo3nwduq8xkcyo3xf3bb5gw95sgpz0bryxmmy1tptfeabrjkns4f1s5qxyoie4djsj2foro2wm8tsakmlkw9roy0fv4pezpdpg54zcavmw06z3jvxaa1oaugo9kmauyjxiow1zdhs19rau7vgcqhiqrwub60rugjaksf1jis6lfca9atgf6axzgdx8gipx0gyw8h6hk0ua23lq2svqiqy3jlw96j82t3oj0aaqvil8l25fvtjm3axqqtjl9td4iyihh6hpym3qoe2wpudnwjyicz6e7aackurxvdgqa5zkwskwbl0btstpjuqzkw7qnvw8ihu9hee9kcvf0721nuycmee89r9781jcakhv1ozmd24425iqrfupih986dn0cubhf5evt8cfdnv3mtrtaml1x36130j9doyveturmt47nmm1ei5rynfud4s4xdlge88bbuwfuaobpvoe2k6rv3stun1v5ku47neo7cbolaa0hdtrpbn64m0icz809dqrm00avv4n2nkaxnd8rtfpwudo
d3vp43fk04wh8diy1w9ccql9kujxo3s9knf468roo4zkf3rhq3y3139e2jh211dim86v1v6ywldr6l7w8wol9phg3g3qzce8oagjhhhjpninvejlvtbewnebjl0h9ektyn7wnhjj12rlwrqbxt1yfwy9ylae83puk385r3gl70iubjpj23i4mci9z8v294va2rikzycma3e8nodpxfk7k81zme8k68pzyo1wb4rrdzbeetx7vvxo9vpmsgqt9xrctuppi0hvuum19a55r649mrf1bof9d5v2m56qzo8jrpupvhhqucdimsm4kpocblumkj412rxb7u7xcnwxviu4hrxz5utgqwl4cgwwh257e638qyh0koiqbaaab30rn880bzk32oikapkl8i2qzz1qjagrqxgowu56iysnc8qkt1a2c5jfn2kuqgj33eeipoysoj88bdqgkn77utx2df47binu93kwfm89cqa5c6kwc9vd2e2ebkokgt55hxardx1ol7p08y6p1h8ya9nyv4yqjnxep1ne56shr4 00:29:52.855 12:49:35 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:29:52.855 12:49:35 -- dd/basic_rw.sh@59 -- # gen_conf 00:29:52.855 12:49:35 -- dd/common.sh@31 -- # xtrace_disable 00:29:52.855 12:49:35 -- common/autotest_common.sh@10 -- # set +x 00:29:52.855 { 00:29:52.855 "subsystems": [ 00:29:52.855 { 00:29:52.855 "subsystem": "bdev", 00:29:52.855 "config": [ 00:29:52.855 { 00:29:52.855 "params": { 00:29:52.855 "trtype": "pcie", 00:29:52.855 "traddr": "0000:00:06.0", 00:29:52.855 "name": "Nvme0" 00:29:52.855 }, 00:29:52.855 "method": "bdev_nvme_attach_controller" 00:29:52.855 }, 00:29:52.855 { 00:29:52.855 "method": "bdev_wait_for_examine" 00:29:52.855 } 00:29:52.855 ] 00:29:52.855 } 00:29:52.855 ] 00:29:52.855 } 00:29:52.855 [2024-10-01 12:49:35.201953] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:52.855 [2024-10-01 12:49:35.202109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134315 ] 00:29:52.855 [2024-10-01 12:49:35.367765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.114 [2024-10-01 12:49:35.617380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.060  Copying: 4096/4096 [B] (average 4000 kBps) 00:29:55.060 00:29:55.060 12:49:37 -- dd/basic_rw.sh@65 -- # gen_conf 00:29:55.060 12:49:37 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:29:55.060 12:49:37 -- dd/common.sh@31 -- # xtrace_disable 00:29:55.060 12:49:37 -- common/autotest_common.sh@10 -- # set +x 00:29:55.060 { 00:29:55.060 "subsystems": [ 00:29:55.060 { 00:29:55.060 "subsystem": "bdev", 00:29:55.060 "config": [ 00:29:55.060 { 00:29:55.060 "params": { 00:29:55.060 "trtype": "pcie", 00:29:55.060 "traddr": "0000:00:06.0", 00:29:55.060 "name": "Nvme0" 00:29:55.060 }, 00:29:55.060 "method": "bdev_nvme_attach_controller" 00:29:55.060 }, 00:29:55.060 { 00:29:55.060 "method": "bdev_wait_for_examine" 00:29:55.060 } 00:29:55.060 ] 00:29:55.060 } 00:29:55.060 ] 00:29:55.060 } 00:29:55.060 [2024-10-01 12:49:37.425776] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
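dd_rw_offset checks addressing rather than volume: the single 4096-byte pattern above is written one block past the start of the namespace (--seek=1), read back from the same offset (--skip=1 --count=1), and then compared in-shell. The backslash-escaped wall of text below is just xtrace rendering that comparison: every character of the expected pattern is escaped so [[ == ]] matches it literally rather than as a glob. A condensed round trip (the read redirection from dd.dump1 is assumed; the trace shows only the read builtin itself):

    # Offset round trip from the commands above: write block 1, read block 1.
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)
    read -rn4096 data_check < dd.dump1     # first 4096 bytes read back (source assumed)
    [[ "$data_check" == "$data" ]]         # quoting the RHS forces a literal match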
00:29:55.060 [2024-10-01 12:49:37.425942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134351 ] 00:29:55.319 [2024-10-01 12:49:37.595268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.319 [2024-10-01 12:49:37.833768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.288  Copying: 4096/4096 [B] (average 4000 kBps) 00:29:57.288 00:29:57.288 12:49:39 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:29:57.288 ************************************ 00:29:57.288 END TEST dd_rw_offset 00:29:57.288 ************************************ 00:29:57.289 12:49:39 -- dd/basic_rw.sh@72 -- # [[ 0u6mqhc5jzzaem9yv3eqshe69ndx8ljwv6s5yvudu6tbxshqsn7nn8mmp262fd23zgv7qztiggljp7677e7kexkkkhq4c7hkfp4ekofhm72gw62if0psu7dcrp0vno5zgxz7hfnghhedk1posxmmjb8tcrb4h83b6b9l1k4r9obirnr3mrx37c2b1q11n85rz46vyq5bo22ls5d86im7jp82ynusdd5aa9bk227yw6to68iftwbomi69fip4iejni01c2l9y8ddwrsmlhjltmgjju5kem0fothql2ilkeoelt4m5r3fo2fxxkwyec4f0wy70a9rau6wq4o511n50thmzmtir8fgke1zy6f5t58xjfvm5zcg6f11pah4megj87headmsljv8gg7ncris7cfh8559fse5amgm5uonjg2mnkfe847p4ibqh3f4uaav9lwrjavs3ovh0ciwv4g875r2ut7n6pzfpx8qcp9f317vai9569pt23recdqgvgyc38ix93txqk7esuzevkqrqqv6w647wtvwa49aql3yc2ci5v437uc8d9py0bnmskjl16z8roypbk6qjm9ve86zdj2m1chqpj9z381463avdmyypymp2so9br2mfzwzp8r0ug9tfdnwd66vn2ice0yi084m0svp776w7xzn6cr3nyjb615r29fl4pp6l0aawcy36gebj3zuojssgzkn8iwvx8up8bza03ajgxkc08kndfgwpuu5cf7cka04jvq588e37oz8sa0alefld103twlnx85h888niet2805s1b0kwd6806c1guexkh1f38ochx32yauravl92scrnzby4ekporpaxmwz1hfy32ba51z7jeucteykv4ozw75lh3s2b2d3amb7uvpvw6fa3px77ccsqgy9yebpbz4bf3h8tkomofouz1p32ye4squq0q1k8tg6xvui4gwo7gsuqtsl4og2ia0gv5iuz67vww40iaoxq5ud34yi7k39hjvm4871bofq5dunc8lkz8lboxc7dfk1n6y5ii58ic08nxz1gknubrh0mxwqecmrailetkf53u6j8y2ina1tn2vs9f6v6xtvbw481pr2o1w2u08k14jnj9qvg71mfwjziiag8939t2ag3qfdf5q7e223ulmuncghi4msm6tier0m3tq1bubpa8nw3sum8al5q43gthgfyzoluwr1knt0or4gbhtfss1rs5is1govf5lbp5j2rhmh15fft8iub1cu5a2si9w0sk4arc1fbv8x0hlm34orokomwdtbt2rubnvq7ljb9xky2i9on3xihlvq0ix3h9n00hmsu6cceyprgci7afh81nh3jad1qeachw7f4z9w89bggze8ok3n027n4qbvsmai79423eu66ikaapa38wycz2g1he367d1sld0lihzbygf789a9evtywrt6s4iqazqe1y65al7zftt7s8wr571yfn3uu538pblktq4v5lblm66jgokihetf5slmsq51k1tw1ogjh419lhhzrhj8wm3xj2er8vl2bnw3xhzq35bv1vwf6qg5j6iga9hjipiypkac344k8s07txlylhgyctw3gstlgr42etbs84r8pdhq00p2yn0ot57rti3aepv2o67vghfvjhazsg9sm814tvg8vo1yvfe4l4sdgchf45w9j0re42w24bh1ckddvfhf55x89mzmb7e69ps4r4468oc9jf6n6bgnibbkdvgs1p99tb9nw9jhvv08ddje2ry45a2mda2gztzzagx9qm6r3cajcv7gn6ui7svzikwne85h5v68xkdj8deo7tmeubuwnb3xetulmqcwh98q05jvfinqct56194mr6ea9w1ea14arbtwjygzqp65vpwiknc1epr8fmv7lhcgv35s9f5294bfc2q3pnqmsgh38y3pr8os77mdkm9n1a63pmk8jfdgl4ttmhslk8k2mmfnhrypyzgoso5z5u2fqlze1hhqv98g9jjqaxklocjtdy8596qfu1a8w6erf59e6l80to8xsca9b7dxz7ussp2qe0i8git54mkn1ppv5s6cu0nl7aaskdhbqjcq5m1xu2nzehscnbnq9ovp631e0eaxl01kjazvo4f3m8out6zs2ntfdrq773o14cama0pt4zzsy21f0z4psj27yhwcr2vg7u675616i7qfn8lfc83v5b9k4tymvowc91yewfp0ddl7zos9ab4ts1ro53tv9wrbxt3rkcua3aln6sw8q19si6vh9r4y18nm9gn25amfigxazd5d12qc4rna86uodhq4va103dhhfz92ta59s2sajx9oze6w5hywbvwyyzdpp5kmqz8k7snxfbp3jyz9satyvcvkomaqtlayx6mx7k08yh6zpwt4t7fezt0kj4htu64x3h108y7tpwtn138h0xpzsoda6d0v6p9opqkmflivd3myfbxr0r98ilh2j2xsmk9d45i882jpjdbsnr21fm1iqdkvsrevafqfmm8dbcijo28sovf9r6jgdy8wg9hn9cv8zqb3nq6kcvm1y5zv3vuwtsrbqsyfy8tf2oqrj55omqyxhi9qa5pnoxqps78qj6f9v5eqk9fz4k1nv7qarck65r18rrkieffe6uobz3bls79u0hf2pth91gawclgxq2lufe09ao9buf1rsp9bie28mldc252erfg6lc
4vdy53zb1o60c1rwq3ng8vpd2ewfqj6p7eyxft585mhqsv32uaig... (remainder of the random readback pattern elided) == \0\u\6\m\q\h\c\5... (glob-escaped copy of the same pattern elided) ]]
00:29:57.289
00:29:57.289 real 0m4.618s
00:29:57.289 user 0m3.798s
00:29:57.289 sys 0m0.670s
00:29:57.289 12:49:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:57.289 12:49:39 -- common/autotest_common.sh@10 -- # set +x
00:29:57.289 12:49:39 -- dd/basic_rw.sh@1 -- # cleanup
00:29:57.289 12:49:39 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:29:57.289 12:49:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:29:57.289 12:49:39 -- dd/common.sh@11 -- # local nvme_ref=
00:29:57.289 12:49:39 -- dd/common.sh@12 -- # local size=0xffff
00:29:57.289 12:49:39 -- dd/common.sh@14 -- # local bs=1048576
00:29:57.289 12:49:39 -- dd/common.sh@15 -- # local count=1
00:29:57.289 12:49:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:29:57.289 12:49:39 -- dd/common.sh@18 -- # gen_conf
00:29:57.289 12:49:39 -- dd/common.sh@31 -- # xtrace_disable
00:29:57.289 12:49:39 -- common/autotest_common.sh@10 -- # set +x
00:29:57.549 {
00:29:57.549 "subsystems": [
00:29:57.549 {
00:29:57.549 "subsystem": "bdev",
00:29:57.549 "config": [
00:29:57.549 {
00:29:57.549 "params": {
00:29:57.549 "trtype": "pcie",
00:29:57.549 "traddr": "0000:00:06.0",
00:29:57.549 "name": "Nvme0"
00:29:57.549 },
00:29:57.549 "method": "bdev_nvme_attach_controller"
00:29:57.549 },
00:29:57.549 {
00:29:57.549 "method": "bdev_wait_for_examine"
00:29:57.549 }
00:29:57.549 ]
00:29:57.549 }
00:29:57.549 ]
00:29:57.549 }
00:29:57.549 [2024-10-01 12:49:39.843013] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:29:57.549 [2024-10-01 12:49:39.843199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134401 ]
00:29:57.549 [2024-10-01 12:49:40.018123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:57.808 [2024-10-01 12:49:40.262750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:59.824  Copying: 1024/1024 [kB] (average 1000 MBps)
00:29:59.824
00:29:59.824 12:49:42 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:29:59.824
00:29:59.824 real 0m53.117s
00:29:59.824 user 0m43.666s
00:29:59.824 sys 0m7.874s
00:29:59.824 12:49:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:59.824 12:49:42 -- common/autotest_common.sh@10 -- # set +x
00:29:59.824 ************************************
00:29:59.824 END TEST spdk_dd_basic_rw
00:29:59.824 ************************************
00:29:59.824 12:49:42 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:29:59.824 12:49:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:29:59.824 12:49:42 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:59.824 12:49:42 -- common/autotest_common.sh@10 -- # set +x
00:29:59.824 ************************************
00:29:59.824 START TEST spdk_dd_posix
00:29:59.824 ************************************
00:29:59.824 12:49:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:29:59.824 * Looking for test storage...
00:29:59.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:29:59.824 12:49:42 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:59.824 12:49:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:59.824 12:49:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:59.824 12:49:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:59.824 12:49:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:29:59.824 12:49:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:29:59.824 12:49:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:29:59.824 12:49:42 -- paths/export.sh@5 -- # export PATH
00:29:59.824 12:49:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:29:59.824 12:49:42 -- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:29:59.824 12:49:42 -- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:29:59.824 12:49:42 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:29:59.824 12:49:42 -- dd/posix.sh@125 -- # trap cleanup EXIT
00:29:59.824 12:49:42 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:29:59.824 12:49:42 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:29:59.824 12:49:42 -- dd/posix.sh@130 -- # tests
00:29:59.824 12:49:42 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO'
00:29:59.824 * First test run, using AIO
00:29:59.824 12:49:42 -- dd/posix.sh@102 -- # run_test dd_flag_append append
00:29:59.824 12:49:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:29:59.824 12:49:42 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:59.824 12:49:42 -- common/autotest_common.sh@10 -- # set +x
00:29:59.824 ************************************
00:29:59.824 START TEST dd_flag_append
00:29:59.824 ************************************
00:29:59.824 12:49:42 -- common/autotest_common.sh@1104 -- # append
00:29:59.824 12:49:42 -- dd/posix.sh@16 -- # local dump0
00:29:59.824 12:49:42 -- dd/posix.sh@17 -- # local dump1
00:29:59.824 12:49:42 -- dd/posix.sh@19 -- # gen_bytes 32
00:29:59.824 12:49:42 -- dd/common.sh@98 -- # xtrace_disable
00:29:59.824 12:49:42 -- common/autotest_common.sh@10 -- # set +x
00:29:59.824 12:49:42 -- dd/posix.sh@19 -- # dump0=1907tj6xpmig194jrurp2u6p7f4cnogu
00:29:59.824 12:49:42 -- dd/posix.sh@20 -- # gen_bytes 32
00:29:59.824 12:49:42 -- dd/common.sh@98 -- # xtrace_disable
00:29:59.824 12:49:42 -- common/autotest_common.sh@10 -- # set +x
00:29:59.824 12:49:42 -- dd/posix.sh@20 -- # dump1=aanz6n5y1637w5grvgoaxvhk2tb5jpoy
00:29:59.824 12:49:42 -- dd/posix.sh@22 -- # printf %s 1907tj6xpmig194jrurp2u6p7f4cnogu
00:29:59.824 12:49:42 -- dd/posix.sh@23 -- # printf %s aanz6n5y1637w5grvgoaxvhk2tb5jpoy
00:29:59.824 12:49:42 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:29:59.824 [2024-10-01 12:49:42.314397] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... [2024-10-01 12:49:42.314567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134489 ]
00:30:00.084 [2024-10-01 12:49:42.484938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:00.343 [2024-10-01 12:49:42.734231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:01.982  Copying: 32/32 [B] (average 31 kBps)
00:30:01.982
00:30:01.982 12:49:44 -- dd/posix.sh@27 -- # [[ aanz6n5y1637w5grvgoaxvhk2tb5jpoy1907tj6xpmig194jrurp2u6p7f4cnogu == \a\a\n\z\6\n\5\y\1\6\3\7\w\5\g\r\v\g\o\a\x\v\h\k\2\t\b\5\j\p\o\y\1\9\0\7\t\j\6\x\p\m\i\g\1\9\4\j\r\u\r\p\2\u\6\p\7\f\4\c\n\o\g\u ]]
00:30:01.982
00:30:01.982 real 0m2.259s
00:30:01.982 user 0m1.802s
00:30:01.982 sys 0m0.320s
00:30:01.982 12:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:01.982 12:49:44 -- common/autotest_common.sh@10 -- # set +x
00:30:02.241 ************************************
00:30:02.241 END TEST dd_flag_append
00:30:02.241 ************************************
00:30:02.241 12:49:44 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:30:02.241 12:49:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:30:02.241 12:49:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:02.241 12:49:44 -- common/autotest_common.sh@10 -- # set +x
00:30:02.241 ************************************
00:30:02.241 START TEST dd_flag_directory
00:30:02.241 ************************************
00:30:02.241 12:49:44 -- common/autotest_common.sh@1104 -- # directory
00:30:02.241 12:49:44 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:30:02.241 12:49:44 -- common/autotest_common.sh@640 -- # local es=0
00:30:02.241 12:49:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:30:02.241 12:49:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:02.241 12:49:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:02.241 12:49:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:02.241 12:49:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:02.241 12:49:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:02.241 12:49:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:02.241 12:49:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:02.241 12:49:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:30:02.241 12:49:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:30:02.500 [2024-10-01 12:49:44.646267] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... [2024-10-01 12:49:44.646929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134542 ]
00:30:02.500 [2024-10-01 12:49:44.817905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:02.759 [2024-10-01 12:49:45.058090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:03.018 [2024-10-01 12:49:45.414448] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:30:03.018 [2024-10-01 12:49:45.414844] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:30:03.018 [2024-10-01 12:49:45.414909] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:03.956 [2024-10-01 12:49:46.281167] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:30:04.524 12:49:46 -- common/autotest_common.sh@643 -- # es=236
00:30:04.524 12:49:46 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:30:04.524 12:49:46 -- common/autotest_common.sh@652 -- # es=108
00:30:04.524 12:49:46 -- common/autotest_common.sh@653 -- # case "$es" in
00:30:04.524 12:49:46 -- common/autotest_common.sh@660 -- # es=1
00:30:04.524 12:49:46 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:30:04.524 12:49:46 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:30:04.524 12:49:46 -- common/autotest_common.sh@640 -- # local es=0
00:30:04.524 12:49:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:30:04.524 12:49:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:04.524 12:49:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:04.524 12:49:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:04.524 12:49:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:04.524 12:49:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:04.524 12:49:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:04.524 12:49:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:04.524 12:49:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:30:04.524 12:49:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:30:04.524 [2024-10-01 12:49:46.832348] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:04.524 [2024-10-01 12:49:46.832509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134574 ]
00:30:04.783 [2024-10-01 12:49:47.003697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:04.783 [2024-10-01 12:49:47.251320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:05.352 [2024-10-01 12:49:47.595317] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:30:05.352 [2024-10-01 12:49:47.595700] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:30:05.352 [2024-10-01 12:49:47.595766] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:06.288 [2024-10-01 12:49:48.467134] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:30:06.548 12:49:48 -- common/autotest_common.sh@643 -- # es=236
00:30:06.548 12:49:48 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:30:06.548 12:49:48 -- common/autotest_common.sh@652 -- # es=108
00:30:06.548 12:49:48 -- common/autotest_common.sh@653 -- # case "$es" in
00:30:06.548 12:49:48 -- common/autotest_common.sh@660 -- # es=1
00:30:06.548 12:49:48 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:30:06.548
00:30:06.548 real 0m4.387s
00:30:06.548 user 0m3.528s
00:30:06.548 sys 0m0.656s
00:30:06.548 12:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:06.548 12:49:48 -- common/autotest_common.sh@10 -- # set +x
00:30:06.548 ************************************
00:30:06.548 END TEST dd_flag_directory
00:30:06.548 ************************************
00:30:06.548 12:49:49 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow
00:30:06.548 12:49:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:30:06.548 12:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:06.548 12:49:49 -- common/autotest_common.sh@10 -- # set +x
00:30:06.548 ************************************
00:30:06.548 START TEST dd_flag_nofollow
00:30:06.548 ************************************
00:30:06.548 12:49:49 -- common/autotest_common.sh@1104 -- # nofollow
00:30:06.548 12:49:49 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:30:06.548 12:49:49 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:30:06.548 12:49:49 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:30:06.548 12:49:49 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:30:06.548 12:49:49 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:30:06.548 12:49:49 -- common/autotest_common.sh@640 -- # local es=0
00:30:06.548 12:49:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:30:06.548 12:49:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:06.548 12:49:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:06.548 12:49:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:06.548 12:49:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:06.548 12:49:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:06.548 12:49:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:06.548 12:49:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:06.548 12:49:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:30:06.548 12:49:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:30:06.807 [2024-10-01 12:49:49.120502] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:06.807 [2024-10-01 12:49:49.120658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134622 ]
00:30:06.807 [2024-10-01 12:49:49.287506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:07.066 [2024-10-01 12:49:49.544242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:07.633 [2024-10-01 12:49:49.899932] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:30:07.633 [2024-10-01 12:49:49.900324] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:30:07.633 [2024-10-01 12:49:49.900401] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:08.571 [2024-10-01 12:49:50.770907] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:30:08.830 12:49:51 -- common/autotest_common.sh@643 -- # es=216
00:30:08.831 12:49:51 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:30:08.831 12:49:51 -- common/autotest_common.sh@652 -- # es=88
00:30:08.831 12:49:51 -- common/autotest_common.sh@653 -- # case "$es" in
00:30:08.831 12:49:51 -- common/autotest_common.sh@660 -- # es=1
00:30:08.831 12:49:51 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:30:08.831 12:49:51 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:30:08.831 12:49:51 -- common/autotest_common.sh@640 -- # local es=0
00:30:08.831 12:49:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:30:08.831 12:49:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:08.831 12:49:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:08.831 12:49:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:08.831 12:49:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:08.831 12:49:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:08.831 12:49:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:30:08.831 12:49:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:08.831 12:49:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:30:08.831 12:49:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:30:08.831 [2024-10-01 12:49:51.345697] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:08.831 [2024-10-01 12:49:51.345869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134655 ]
00:30:09.102 [2024-10-01 12:49:51.520491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:09.412 [2024-10-01 12:49:51.764815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:09.670 [2024-10-01 12:49:52.120044] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:30:09.670 [2024-10-01 12:49:52.120404] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:30:09.670 [2024-10-01 12:49:52.120470] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:10.605 [2024-10-01 12:49:52.974759] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:30:11.172 12:49:53 -- common/autotest_common.sh@643 -- # es=216
00:30:11.172 12:49:53 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:30:11.172 12:49:53 -- common/autotest_common.sh@652 -- # es=88
00:30:11.172 12:49:53 -- common/autotest_common.sh@653 -- # case "$es" in
00:30:11.172 12:49:53 -- common/autotest_common.sh@660 -- # es=1
00:30:11.172 12:49:53 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:30:11.172 12:49:53 -- dd/posix.sh@46 -- # gen_bytes 512
00:30:11.172 12:49:53 -- dd/common.sh@98 -- # xtrace_disable
00:30:11.172 12:49:53 -- common/autotest_common.sh@10 -- # set +x
00:30:11.172 12:49:53 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:30:11.172 [2024-10-01 12:49:53.546255] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:11.172 [2024-10-01 12:49:53.546427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134689 ] 00:30:11.430 [2024-10-01 12:49:53.716880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.430 [2024-10-01 12:49:53.940361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.375  Copying: 512/512 [B] (average 500 kBps) 00:30:13.375 00:30:13.375 12:49:55 -- dd/posix.sh@49 -- # [[ y6c72dt5v0mekw1w2g37d7qs5zdxaswtemryhw87qo64pq9x8lnkwfx4533z4uli13cywj54ciy9229fc2qmvnmsq3sx9z6viuwcnyepn8wl3b39oo52b6ix5qco48vfwc38z216jgwz2u85porctzsy9vmfwfr53oxlsfjzhg1bo6y6b0chc3siaawxooauv2ca3a1quod6r7uanenrpttz2msh66g9vsqqgrk6d1bdbjtv6m414wjav9d663wsogkdvvjvf78pb9vrs1cwid96xqjj0509ckjh5x6gyk8tk9xrlmrkhijjrtfl96eehi0es4yzrxlvqtulgemwon9qbv8k9ak9d71068sqfsb1kjnqweb6y8vez3r0iw1urkne8iqi58ko651n157qsf40vg1yzd8w2oqy5efn3cnk9jumgps67ob5jm3zq057go65nrgcbbsl13596sfu7pzsi8y50hedgjb8r6z3jx8hah6cug16t2ws6eyh9ms8 == \y\6\c\7\2\d\t\5\v\0\m\e\k\w\1\w\2\g\3\7\d\7\q\s\5\z\d\x\a\s\w\t\e\m\r\y\h\w\8\7\q\o\6\4\p\q\9\x\8\l\n\k\w\f\x\4\5\3\3\z\4\u\l\i\1\3\c\y\w\j\5\4\c\i\y\9\2\2\9\f\c\2\q\m\v\n\m\s\q\3\s\x\9\z\6\v\i\u\w\c\n\y\e\p\n\8\w\l\3\b\3\9\o\o\5\2\b\6\i\x\5\q\c\o\4\8\v\f\w\c\3\8\z\2\1\6\j\g\w\z\2\u\8\5\p\o\r\c\t\z\s\y\9\v\m\f\w\f\r\5\3\o\x\l\s\f\j\z\h\g\1\b\o\6\y\6\b\0\c\h\c\3\s\i\a\a\w\x\o\o\a\u\v\2\c\a\3\a\1\q\u\o\d\6\r\7\u\a\n\e\n\r\p\t\t\z\2\m\s\h\6\6\g\9\v\s\q\q\g\r\k\6\d\1\b\d\b\j\t\v\6\m\4\1\4\w\j\a\v\9\d\6\6\3\w\s\o\g\k\d\v\v\j\v\f\7\8\p\b\9\v\r\s\1\c\w\i\d\9\6\x\q\j\j\0\5\0\9\c\k\j\h\5\x\6\g\y\k\8\t\k\9\x\r\l\m\r\k\h\i\j\j\r\t\f\l\9\6\e\e\h\i\0\e\s\4\y\z\r\x\l\v\q\t\u\l\g\e\m\w\o\n\9\q\b\v\8\k\9\a\k\9\d\7\1\0\6\8\s\q\f\s\b\1\k\j\n\q\w\e\b\6\y\8\v\e\z\3\r\0\i\w\1\u\r\k\n\e\8\i\q\i\5\8\k\o\6\5\1\n\1\5\7\q\s\f\4\0\v\g\1\y\z\d\8\w\2\o\q\y\5\e\f\n\3\c\n\k\9\j\u\m\g\p\s\6\7\o\b\5\j\m\3\z\q\0\5\7\g\o\6\5\n\r\g\c\b\b\s\l\1\3\5\9\6\s\f\u\7\p\z\s\i\8\y\5\0\h\e\d\g\j\b\8\r\6\z\3\j\x\8\h\a\h\6\c\u\g\1\6\t\2\w\s\6\e\y\h\9\m\s\8 ]] 00:30:13.375 00:30:13.375 real 0m6.636s 00:30:13.375 user 0m5.387s 00:30:13.375 sys 0m0.917s 00:30:13.375 12:49:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.375 12:49:55 -- common/autotest_common.sh@10 -- # set +x 00:30:13.375 ************************************ 00:30:13.375 END TEST dd_flag_nofollow 00:30:13.375 ************************************ 00:30:13.375 12:49:55 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:30:13.375 12:49:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:13.375 12:49:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.375 12:49:55 -- common/autotest_common.sh@10 -- # set +x 00:30:13.375 ************************************ 00:30:13.375 START TEST dd_flag_noatime 00:30:13.375 ************************************ 00:30:13.375 12:49:55 -- common/autotest_common.sh@1104 -- # noatime 00:30:13.375 12:49:55 -- dd/posix.sh@53 -- # local atime_if 00:30:13.375 12:49:55 -- dd/posix.sh@54 -- # local atime_of 00:30:13.375 12:49:55 -- dd/posix.sh@58 -- # gen_bytes 512 00:30:13.375 12:49:55 -- dd/common.sh@98 -- # xtrace_disable 00:30:13.375 12:49:55 -- common/autotest_common.sh@10 -- # set +x 00:30:13.375 12:49:55 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:13.375 12:49:55 -- dd/posix.sh@60 -- # atime_if=1727786994 00:30:13.375 12:49:55 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:13.375 12:49:55 -- dd/posix.sh@61 -- # atime_of=1727786995 00:30:13.375 12:49:55 -- dd/posix.sh@66 -- # sleep 1 00:30:14.311 12:49:56 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:14.569 [2024-10-01 12:49:56.857916] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:14.569 [2024-10-01 12:49:56.858136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134758 ] 00:30:14.569 [2024-10-01 12:49:57.039912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.828 [2024-10-01 12:49:57.259774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.464  Copying: 512/512 [B] (average 500 kBps) 00:30:16.464 00:30:16.464 12:49:58 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:16.464 12:49:58 -- dd/posix.sh@69 -- # (( atime_if == 1727786994 )) 00:30:16.464 12:49:58 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:16.724 12:49:58 -- dd/posix.sh@70 -- # (( atime_of == 1727786995 )) 00:30:16.725 12:49:59 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:16.725 [2024-10-01 12:49:59.068227] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:16.725 [2024-10-01 12:49:59.068383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134790 ] 00:30:16.725 [2024-10-01 12:49:59.236133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.983 [2024-10-01 12:49:59.476345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.946  Copying: 512/512 [B] (average 500 kBps) 00:30:18.946 00:30:18.946 12:50:01 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:18.946 12:50:01 -- dd/posix.sh@73 -- # (( atime_if < 1727786999 )) 00:30:18.946 00:30:18.946 real 0m5.480s 00:30:18.946 user 0m3.623s 00:30:18.946 sys 0m0.588s 00:30:18.946 12:50:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.946 12:50:01 -- common/autotest_common.sh@10 -- # set +x 00:30:18.946 ************************************ 00:30:18.946 END TEST dd_flag_noatime 00:30:18.946 ************************************ 00:30:18.946 12:50:01 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:30:18.946 12:50:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:18.946 12:50:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:18.946 12:50:01 -- common/autotest_common.sh@10 -- # set +x 00:30:18.946 ************************************ 00:30:18.946 START TEST dd_flags_misc 00:30:18.946 ************************************ 00:30:18.946 12:50:01 -- common/autotest_common.sh@1104 -- # io 00:30:18.946 12:50:01 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:30:18.946 12:50:01 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:30:18.946 12:50:01 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:30:18.946 12:50:01 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:30:18.946 12:50:01 -- dd/posix.sh@86 -- # gen_bytes 512
00:30:18.946 12:50:01 -- dd/common.sh@98 -- # xtrace_disable
00:30:18.946 12:50:01 -- common/autotest_common.sh@10 -- # set +x
00:30:18.946 12:50:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:30:18.946 12:50:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:30:19.223 [2024-10-01 12:50:01.411079] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... [2024-10-01 12:50:01.411277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134834 ]
00:30:19.223 [2024-10-01 12:50:01.580339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:19.480 [2024-10-01 12:50:01.832232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:21.110  Copying: 512/512 [B] (average 500 kBps)
00:30:21.110
00:30:21.110 12:50:03 -- dd/posix.sh@93 -- # [[ xq38s73ejyc2l8xws24luxbunrbmood4halu3s7ve86on12rrc7eo... (512-byte random pattern elided) == \x\q\3\8\s\7\3... (glob-escaped copy of the same pattern elided) ]]
00:30:21.110 12:50:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:30:21.110 12:50:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:30:21.378 [2024-10-01 12:50:03.678159] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:21.378 [2024-10-01 12:50:03.678722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134867 ]
00:30:21.378 [2024-10-01 12:50:03.848074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:21.637 [2024-10-01 12:50:04.096367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:23.666  Copying: 512/512 [B] (average 500 kBps)
00:30:23.666
00:30:23.666 12:50:05 -- dd/posix.sh@93 -- # [[ xq38s73ejyc2l8xws24luxbunrbmood4halu3s7ve86on12rrc7eo... (512-byte random pattern elided) == \x\q\3\8\s\7\3... (glob-escaped copy of the same pattern elided) ]]
00:30:23.666 12:50:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:30:23.666 12:50:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:30:23.666 [2024-10-01 12:50:05.922522] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:23.666 [2024-10-01 12:50:05.922698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134903 ]
00:30:23.924 [2024-10-01 12:50:06.091964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:23.924 [2024-10-01 12:50:06.341411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:26.086  Copying: 512/512 [B] (average 166 kBps)
00:30:26.086
00:30:26.087 12:50:08 -- dd/posix.sh@93 -- # [[ xq38s73ejyc2l8xws24luxbunrbmood4halu3s7ve86on12rrc7eo... (512-byte random pattern elided) == \x\q\3\8\s\7\3... (glob-escaped copy of the same pattern elided) ]]
00:30:26.087 12:50:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:30:26.087 12:50:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:30:26.087 [2024-10-01 12:50:08.204478] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:26.087 [2024-10-01 12:50:08.204712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134932 ]
00:30:26.345 [2024-10-01 12:50:08.392224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:26.345 [2024-10-01 12:50:08.646392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:27.979  Copying: 512/512 [B] (average 250 kBps)
00:30:27.979
00:30:27.980 12:50:10 -- dd/posix.sh@93 -- # [[ xq38s73ejyc2l8xws24luxbunrbmood4halu3s7ve86on12rrc7eo... (512-byte random pattern elided) == \x\q\3\8\s\7\3... (glob-escaped copy of the same pattern elided) ]]
00:30:27.980 12:50:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:30:27.980 12:50:10 -- dd/posix.sh@86 -- # gen_bytes 512
00:30:27.980 12:50:10 -- dd/common.sh@98 -- # xtrace_disable
00:30:27.980 12:50:10 -- common/autotest_common.sh@10 -- # set +x
00:30:27.980 12:50:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:30:27.980 12:50:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:30:28.238 [2024-10-01 12:50:10.579994] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:28.238 [2024-10-01 12:50:10.580623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134964 ]
00:30:28.498 [2024-10-01 12:50:10.750624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:28.498 [2024-10-01 12:50:11.021597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:30.446  Copying: 512/512 [B] (average 500 kBps)
00:30:30.446
00:30:30.446 12:50:12 -- dd/posix.sh@93 -- # [[ 0y02hchyr44sln93jr5oniac0ywhqb2p9rmg7h78r32e96yckfbqz14... (512-byte random pattern elided) == \0\y\0\2\h\c\h\y... (glob-escaped copy of the same pattern elided) ]]
00:30:30.446 12:50:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:30:30.446 12:50:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:30:30.446 [2024-10-01 12:50:12.861461] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:30.446 [2024-10-01 12:50:12.861622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134992 ]
00:30:30.705 [2024-10-01 12:50:13.032837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:30.967 [2024-10-01 12:50:13.285902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:32.602  Copying: 512/512 [B] (average 500 kBps)
00:30:32.602
00:30:32.602 12:50:15 -- dd/posix.sh@93 -- # [[ 0y02hchyr44sln93jr5oniac0ywhqb2p9rmg7h78r32e96yckfbqz14... (512-byte random pattern elided) == \0\y\0\2\h\c\h\y... (glob-escaped copy of the same pattern elided) ]]
00:30:32.602 12:50:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:30:32.602 12:50:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:30:32.602 [2024-10-01 12:50:15.134043] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:32.602 [2024-10-01 12:50:15.134252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135024 ] 00:30:32.860 [2024-10-01 12:50:15.306776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.118 [2024-10-01 12:50:15.567459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.127  Copying: 512/512 [B] (average 166 kBps) 00:30:35.127 00:30:35.127 12:50:17 -- dd/posix.sh@93 -- # [[ 0y02hchyr44sln93jr5oniac0ywhqb2p9rmg7h78r32e96yckfbqz14ocigjjpjtuwxpfm0cyn4yg5kobhoipyncf2t77c1ucr2gs3yu9p7sviysgqbshmfppyfytds1vwggmkzyhnx3ti826av02s5h6wqc252jlmxy1fz6yo8ueusb87tbs16h4nj9bjzla1axqu8onqp5d76tc09usl19js4ze8j3jui8sa8jecp9mqnretp4a245qf7ywe3c7bxhh2kuq08i4wx4vb6hwgktpgvzng93o9qis1c60cg5w1swlovlhxonvumrfk3jxxvekownath84y36s4j3sdpy8ocrhmiljk0p011smvc0npuwjzwiiyua84c6ok1lw8y9134bwqpfv2je1iztxw2xli0neerd823c4cgwqg22p27tye55uah79xipu39xrdtmxrs6psvfmtz900tle4f3ts7kgxuhrpuw8jyrighra3nfhjc99on2dvg8pj14 == \0\y\0\2\h\c\h\y\r\4\4\s\l\n\9\3\j\r\5\o\n\i\a\c\0\y\w\h\q\b\2\p\9\r\m\g\7\h\7\8\r\3\2\e\9\6\y\c\k\f\b\q\z\1\4\o\c\i\g\j\j\p\j\t\u\w\x\p\f\m\0\c\y\n\4\y\g\5\k\o\b\h\o\i\p\y\n\c\f\2\t\7\7\c\1\u\c\r\2\g\s\3\y\u\9\p\7\s\v\i\y\s\g\q\b\s\h\m\f\p\p\y\f\y\t\d\s\1\v\w\g\g\m\k\z\y\h\n\x\3\t\i\8\2\6\a\v\0\2\s\5\h\6\w\q\c\2\5\2\j\l\m\x\y\1\f\z\6\y\o\8\u\e\u\s\b\8\7\t\b\s\1\6\h\4\n\j\9\b\j\z\l\a\1\a\x\q\u\8\o\n\q\p\5\d\7\6\t\c\0\9\u\s\l\1\9\j\s\4\z\e\8\j\3\j\u\i\8\s\a\8\j\e\c\p\9\m\q\n\r\e\t\p\4\a\2\4\5\q\f\7\y\w\e\3\c\7\b\x\h\h\2\k\u\q\0\8\i\4\w\x\4\v\b\6\h\w\g\k\t\p\g\v\z\n\g\9\3\o\9\q\i\s\1\c\6\0\c\g\5\w\1\s\w\l\o\v\l\h\x\o\n\v\u\m\r\f\k\3\j\x\x\v\e\k\o\w\n\a\t\h\8\4\y\3\6\s\4\j\3\s\d\p\y\8\o\c\r\h\m\i\l\j\k\0\p\0\1\1\s\m\v\c\0\n\p\u\w\j\z\w\i\i\y\u\a\8\4\c\6\o\k\1\l\w\8\y\9\1\3\4\b\w\q\p\f\v\2\j\e\1\i\z\t\x\w\2\x\l\i\0\n\e\e\r\d\8\2\3\c\4\c\g\w\q\g\2\2\p\2\7\t\y\e\5\5\u\a\h\7\9\x\i\p\u\3\9\x\r\d\t\m\x\r\s\6\p\s\v\f\m\t\z\9\0\0\t\l\e\4\f\3\t\s\7\k\g\x\u\h\r\p\u\w\8\j\y\r\i\g\h\r\a\3\n\f\h\j\c\9\9\o\n\2\d\v\g\8\p\j\1\4 ]] 00:30:35.127 12:50:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:35.127 12:50:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:30:35.127 [2024-10-01 12:50:17.519153] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:30:35.127 [2024-10-01 12:50:17.519596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135053 ] 00:30:35.386 [2024-10-01 12:50:17.691019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.645 [2024-10-01 12:50:17.952994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.280  Copying: 512/512 [B] (average 166 kBps) 00:30:37.280 00:30:37.539 ************************************ 00:30:37.539 END TEST dd_flags_misc 00:30:37.539 ************************************ 00:30:37.539 12:50:19 -- dd/posix.sh@93 -- # [[ 0y02hchyr44sln93jr5oniac0ywhqb2p9rmg7h78r32e96yckfbqz14ocigjjpjtuwxpfm0cyn4yg5kobhoipyncf2t77c1ucr2gs3yu9p7sviysgqbshmfppyfytds1vwggmkzyhnx3ti826av02s5h6wqc252jlmxy1fz6yo8ueusb87tbs16h4nj9bjzla1axqu8onqp5d76tc09usl19js4ze8j3jui8sa8jecp9mqnretp4a245qf7ywe3c7bxhh2kuq08i4wx4vb6hwgktpgvzng93o9qis1c60cg5w1swlovlhxonvumrfk3jxxvekownath84y36s4j3sdpy8ocrhmiljk0p011smvc0npuwjzwiiyua84c6ok1lw8y9134bwqpfv2je1iztxw2xli0neerd823c4cgwqg22p27tye55uah79xipu39xrdtmxrs6psvfmtz900tle4f3ts7kgxuhrpuw8jyrighra3nfhjc99on2dvg8pj14 == \0\y\0\2\h\c\h\y\r\4\4\s\l\n\9\3\j\r\5\o\n\i\a\c\0\y\w\h\q\b\2\p\9\r\m\g\7\h\7\8\r\3\2\e\9\6\y\c\k\f\b\q\z\1\4\o\c\i\g\j\j\p\j\t\u\w\x\p\f\m\0\c\y\n\4\y\g\5\k\o\b\h\o\i\p\y\n\c\f\2\t\7\7\c\1\u\c\r\2\g\s\3\y\u\9\p\7\s\v\i\y\s\g\q\b\s\h\m\f\p\p\y\f\y\t\d\s\1\v\w\g\g\m\k\z\y\h\n\x\3\t\i\8\2\6\a\v\0\2\s\5\h\6\w\q\c\2\5\2\j\l\m\x\y\1\f\z\6\y\o\8\u\e\u\s\b\8\7\t\b\s\1\6\h\4\n\j\9\b\j\z\l\a\1\a\x\q\u\8\o\n\q\p\5\d\7\6\t\c\0\9\u\s\l\1\9\j\s\4\z\e\8\j\3\j\u\i\8\s\a\8\j\e\c\p\9\m\q\n\r\e\t\p\4\a\2\4\5\q\f\7\y\w\e\3\c\7\b\x\h\h\2\k\u\q\0\8\i\4\w\x\4\v\b\6\h\w\g\k\t\p\g\v\z\n\g\9\3\o\9\q\i\s\1\c\6\0\c\g\5\w\1\s\w\l\o\v\l\h\x\o\n\v\u\m\r\f\k\3\j\x\x\v\e\k\o\w\n\a\t\h\8\4\y\3\6\s\4\j\3\s\d\p\y\8\o\c\r\h\m\i\l\j\k\0\p\0\1\1\s\m\v\c\0\n\p\u\w\j\z\w\i\i\y\u\a\8\4\c\6\o\k\1\l\w\8\y\9\1\3\4\b\w\q\p\f\v\2\j\e\1\i\z\t\x\w\2\x\l\i\0\n\e\e\r\d\8\2\3\c\4\c\g\w\q\g\2\2\p\2\7\t\y\e\5\5\u\a\h\7\9\x\i\p\u\3\9\x\r\d\t\m\x\r\s\6\p\s\v\f\m\t\z\9\0\0\t\l\e\4\f\3\t\s\7\k\g\x\u\h\r\p\u\w\8\j\y\r\i\g\h\r\a\3\n\f\h\j\c\9\9\o\n\2\d\v\g\8\p\j\1\4 ]] 00:30:37.539 00:30:37.539 real 0m18.502s 00:30:37.539 user 0m14.947s 00:30:37.539 sys 0m2.489s 00:30:37.539 12:50:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.539 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:37.539 12:50:19 -- dd/posix.sh@131 -- # tests_forced_aio 00:30:37.539 12:50:19 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:30:37.539 * Second test run, using AIO 00:30:37.539 12:50:19 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:30:37.539 12:50:19 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:30:37.539 12:50:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:37.539 12:50:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.539 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:37.539 ************************************ 00:30:37.539 START TEST dd_flag_append_forced_aio 00:30:37.539 ************************************ 00:30:37.539 12:50:19 -- common/autotest_common.sh@1104 -- # append 00:30:37.539 12:50:19 -- dd/posix.sh@16 -- # local dump0 00:30:37.539 12:50:19 -- dd/posix.sh@17 -- # local dump1 00:30:37.539 12:50:19 -- dd/posix.sh@19 -- # gen_bytes 32 00:30:37.539 12:50:19 -- dd/common.sh@98 -- # xtrace_disable 
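The dd_flag_append_forced_aio test starting here generates two 32-byte random strings, writes one to each dump file, appends dump0 onto dump1 with --oflag=append, and then asserts that dump1 holds its old contents followed by the appended data. A condensed sketch of the flow visible in the trace (gen_bytes is the suite's random-byte helper; full paths shortened):

  dump0=$(gen_bytes 32)            # d200lashjxo1jh1n5zec7c91wnpm4qls in this run
  dump1=$(gen_bytes 32)            # 257rj33fgqfzkiq6pmjbqi2ryy4o30ny
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ $(< dd.dump1) == "${dump1}${dump0}" ]]   # original bytes first, appended bytes after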
00:30:37.539 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:37.539 12:50:19 -- dd/posix.sh@19 -- # dump0=d200lashjxo1jh1n5zec7c91wnpm4qls 00:30:37.539 12:50:19 -- dd/posix.sh@20 -- # gen_bytes 32 00:30:37.539 12:50:19 -- dd/common.sh@98 -- # xtrace_disable 00:30:37.539 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:37.539 12:50:19 -- dd/posix.sh@20 -- # dump1=257rj33fgqfzkiq6pmjbqi2ryy4o30ny 00:30:37.539 12:50:19 -- dd/posix.sh@22 -- # printf %s d200lashjxo1jh1n5zec7c91wnpm4qls 00:30:37.539 12:50:19 -- dd/posix.sh@23 -- # printf %s 257rj33fgqfzkiq6pmjbqi2ryy4o30ny 00:30:37.539 12:50:19 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:30:37.539 [2024-10-01 12:50:20.013267] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:37.539 [2024-10-01 12:50:20.013989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135105 ] 00:30:37.798 [2024-10-01 12:50:20.192761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.058 [2024-10-01 12:50:20.453972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.751  Copying: 32/32 [B] (average 31 kBps) 00:30:39.751 00:30:40.009 12:50:22 -- dd/posix.sh@27 -- # [[ 257rj33fgqfzkiq6pmjbqi2ryy4o30nyd200lashjxo1jh1n5zec7c91wnpm4qls == \2\5\7\r\j\3\3\f\g\q\f\z\k\i\q\6\p\m\j\b\q\i\2\r\y\y\4\o\3\0\n\y\d\2\0\0\l\a\s\h\j\x\o\1\j\h\1\n\5\z\e\c\7\c\9\1\w\n\p\m\4\q\l\s ]] 00:30:40.009 00:30:40.009 real 0m2.389s 00:30:40.009 user 0m1.954s 00:30:40.009 sys 0m0.302s 00:30:40.009 12:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.009 ************************************ 00:30:40.009 END TEST dd_flag_append_forced_aio 00:30:40.009 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:40.009 ************************************ 00:30:40.009 12:50:22 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:30:40.009 12:50:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:40.009 12:50:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.009 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:40.009 ************************************ 00:30:40.009 START TEST dd_flag_directory_forced_aio 00:30:40.009 ************************************ 00:30:40.009 12:50:22 -- common/autotest_common.sh@1104 -- # directory 00:30:40.009 12:50:22 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:40.009 12:50:22 -- common/autotest_common.sh@640 -- # local es=0 00:30:40.009 12:50:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:40.009 12:50:22 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:40.009 12:50:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:40.009 12:50:22 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:40.009 12:50:22 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:40.010 12:50:22 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:40.010 12:50:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:40.010 12:50:22 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:40.010 12:50:22 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:40.010 12:50:22 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:40.010 [2024-10-01 12:50:22.451123] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:40.010 [2024-10-01 12:50:22.451318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135165 ] 00:30:40.268 [2024-10-01 12:50:22.625269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.526 [2024-10-01 12:50:22.889147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.784 [2024-10-01 12:50:23.279068] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:40.784 [2024-10-01 12:50:23.279461] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:40.784 [2024-10-01 12:50:23.279532] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:41.719 [2024-10-01 12:50:24.201402] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:42.287 12:50:24 -- common/autotest_common.sh@643 -- # es=236 00:30:42.287 12:50:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:42.287 12:50:24 -- common/autotest_common.sh@652 -- # es=108 00:30:42.287 12:50:24 -- common/autotest_common.sh@653 -- # case "$es" in 00:30:42.287 12:50:24 -- common/autotest_common.sh@660 -- # es=1 00:30:42.287 12:50:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:42.287 12:50:24 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:42.287 12:50:24 -- common/autotest_common.sh@640 -- # local es=0 00:30:42.287 12:50:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:42.287 12:50:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:42.287 12:50:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:42.287 12:50:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:42.287 12:50:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:42.287 12:50:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:42.287 12:50:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:42.287 12:50:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
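Both directory probes run under the suite's NOT wrapper: spdk_dd is expected to fail with "Not a directory" when --iflag=directory or --oflag=directory names a regular file, and the wrapper inverts the exit status so the expected failure counts as a pass. The es=236, es=108, es=1 bookkeeping in the trace is that normalisation. A minimal sketch of such a wrapper, assuming the shape the traced lines suggest (the real helper in common/autotest_common.sh also validates the executable first, the valid_exec_arg calls above):

  NOT() {
      local es=0
      "$@" || es=$?                          # run the command, keep its exit status
      (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style exit codes
      (( es != 0 )) && es=1                  # collapse any failure to a generic 1
      (( ! es == 0 ))                        # succeed only if the wrapped command failed
  }
  NOT spdk_dd --aio --if=dd.dump0 --iflag=directory --of=dd.dump0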
00:30:42.287 12:50:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:42.287 12:50:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:42.287 [2024-10-01 12:50:24.767588] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:42.287 [2024-10-01 12:50:24.767762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135198 ] 00:30:42.549 [2024-10-01 12:50:24.939155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.864 [2024-10-01 12:50:25.200438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.124 [2024-10-01 12:50:25.561643] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:43.124 [2024-10-01 12:50:25.561921] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:43.124 [2024-10-01 12:50:25.561988] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:44.062 [2024-10-01 12:50:26.458947] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:44.631 12:50:26 -- common/autotest_common.sh@643 -- # es=236 00:30:44.631 12:50:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:44.631 12:50:26 -- common/autotest_common.sh@652 -- # es=108 00:30:44.631 12:50:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:30:44.631 12:50:26 -- common/autotest_common.sh@660 -- # es=1 00:30:44.631 12:50:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:44.631 00:30:44.631 real 0m4.600s 00:30:44.631 user 0m3.774s 00:30:44.631 sys 0m0.624s 00:30:44.631 12:50:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.631 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:30:44.631 ************************************ 00:30:44.631 END TEST dd_flag_directory_forced_aio 00:30:44.631 ************************************ 00:30:44.631 12:50:27 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:30:44.631 12:50:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:44.631 12:50:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:44.631 12:50:27 -- common/autotest_common.sh@10 -- # set +x 00:30:44.631 ************************************ 00:30:44.631 START TEST dd_flag_nofollow_forced_aio 00:30:44.631 ************************************ 00:30:44.631 12:50:27 -- common/autotest_common.sh@1104 -- # nofollow 00:30:44.631 12:50:27 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:44.631 12:50:27 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:44.631 12:50:27 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:44.631 12:50:27 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:44.631 12:50:27 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:44.631 12:50:27 -- common/autotest_common.sh@640 -- # local es=0 00:30:44.631 12:50:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:44.631 12:50:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.631 12:50:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:44.631 12:50:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.631 12:50:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:44.631 12:50:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.631 12:50:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:44.631 12:50:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:44.631 12:50:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:44.631 12:50:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:44.631 [2024-10-01 12:50:27.135504] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:44.631 [2024-10-01 12:50:27.135935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135248 ] 00:30:44.890 [2024-10-01 12:50:27.305251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.149 [2024-10-01 12:50:27.575541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.717 [2024-10-01 12:50:27.969520] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:30:45.717 [2024-10-01 12:50:27.969917] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:30:45.717 [2024-10-01 12:50:27.969995] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:46.652 [2024-10-01 12:50:28.893454] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:46.910 12:50:29 -- common/autotest_common.sh@643 -- # es=216 00:30:46.910 12:50:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:46.910 12:50:29 -- common/autotest_common.sh@652 -- # es=88 00:30:46.910 12:50:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:30:46.910 12:50:29 -- common/autotest_common.sh@660 -- # es=1 00:30:46.910 12:50:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:46.910 12:50:29 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:30:46.910 12:50:29 -- common/autotest_common.sh@640 -- # local es=0 00:30:46.910 12:50:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:30:46.910 12:50:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.910 12:50:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:46.910 12:50:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.910 12:50:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:46.910 12:50:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.910 12:50:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:46.910 12:50:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.910 12:50:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:46.910 12:50:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:30:47.168 [2024-10-01 12:50:29.468945] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:47.168 [2024-10-01 12:50:29.469386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135287 ] 00:30:47.168 [2024-10-01 12:50:29.644481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.426 [2024-10-01 12:50:29.917842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.031 [2024-10-01 12:50:30.289157] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:30:48.031 [2024-10-01 12:50:30.289566] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:30:48.031 [2024-10-01 12:50:30.289639] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:48.966 [2024-10-01 12:50:31.226522] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:49.225 12:50:31 -- common/autotest_common.sh@643 -- # es=216 00:30:49.225 12:50:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:49.225 12:50:31 -- common/autotest_common.sh@652 -- # es=88 00:30:49.225 12:50:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:30:49.225 12:50:31 -- common/autotest_common.sh@660 -- # es=1 00:30:49.225 12:50:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:49.225 12:50:31 -- dd/posix.sh@46 -- # gen_bytes 512 00:30:49.225 12:50:31 -- dd/common.sh@98 -- # xtrace_disable 00:30:49.225 12:50:31 -- common/autotest_common.sh@10 -- # set +x 00:30:49.485 12:50:31 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:49.485 [2024-10-01 12:50:31.841879] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:30:49.485 [2024-10-01 12:50:31.842087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135316 ] 00:30:49.485 [2024-10-01 12:50:32.018589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.053 [2024-10-01 12:50:32.292298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.283  Copying: 512/512 [B] (average 500 kBps) 00:30:52.283 00:30:52.284 12:50:34 -- dd/posix.sh@49 -- # [[ dap492rw4b31pomqefpn69n3xygmodg89enwj8rg71m3e9eydk3v4u1ms5h6edptb62v93ns7m00vs2wd316shtfml4ebywzm0zo92f3yg2o05wucprhxlfno6xgdsbvntlp4z5c19wcks4jthyvp55s9110plz1iummsbrfmazpkjfaz8jh6ngtcqqaxvigoit38lzpp026ru4lx1qivzlhnhderhy3cmukzb0noog8ijl9skya5k452fqu73ka5lm6vyf9az6ak3ojbpn84ir9oybrc3kuspmr19t3e7gn7wlc4bhr2meph2wiqa65cufilqmrme33wyhshoeojqztsd0up16itgq48rr62lzgcug0sccxtckj5nd3o3484pouaowu1s5yspnduftv3u59vvafglce75214h7gv4i8rwv5etfycrbwusn8nx27jvoqrwe7l56tum4zk5zlttibu7fzht6q2vjgrqziefxgq5xsa1wtqgp2pni1p7aq == \d\a\p\4\9\2\r\w\4\b\3\1\p\o\m\q\e\f\p\n\6\9\n\3\x\y\g\m\o\d\g\8\9\e\n\w\j\8\r\g\7\1\m\3\e\9\e\y\d\k\3\v\4\u\1\m\s\5\h\6\e\d\p\t\b\6\2\v\9\3\n\s\7\m\0\0\v\s\2\w\d\3\1\6\s\h\t\f\m\l\4\e\b\y\w\z\m\0\z\o\9\2\f\3\y\g\2\o\0\5\w\u\c\p\r\h\x\l\f\n\o\6\x\g\d\s\b\v\n\t\l\p\4\z\5\c\1\9\w\c\k\s\4\j\t\h\y\v\p\5\5\s\9\1\1\0\p\l\z\1\i\u\m\m\s\b\r\f\m\a\z\p\k\j\f\a\z\8\j\h\6\n\g\t\c\q\q\a\x\v\i\g\o\i\t\3\8\l\z\p\p\0\2\6\r\u\4\l\x\1\q\i\v\z\l\h\n\h\d\e\r\h\y\3\c\m\u\k\z\b\0\n\o\o\g\8\i\j\l\9\s\k\y\a\5\k\4\5\2\f\q\u\7\3\k\a\5\l\m\6\v\y\f\9\a\z\6\a\k\3\o\j\b\p\n\8\4\i\r\9\o\y\b\r\c\3\k\u\s\p\m\r\1\9\t\3\e\7\g\n\7\w\l\c\4\b\h\r\2\m\e\p\h\2\w\i\q\a\6\5\c\u\f\i\l\q\m\r\m\e\3\3\w\y\h\s\h\o\e\o\j\q\z\t\s\d\0\u\p\1\6\i\t\g\q\4\8\r\r\6\2\l\z\g\c\u\g\0\s\c\c\x\t\c\k\j\5\n\d\3\o\3\4\8\4\p\o\u\a\o\w\u\1\s\5\y\s\p\n\d\u\f\t\v\3\u\5\9\v\v\a\f\g\l\c\e\7\5\2\1\4\h\7\g\v\4\i\8\r\w\v\5\e\t\f\y\c\r\b\w\u\s\n\8\n\x\2\7\j\v\o\q\r\w\e\7\l\5\6\t\u\m\4\z\k\5\z\l\t\t\i\b\u\7\f\z\h\t\6\q\2\v\j\g\r\q\z\i\e\f\x\g\q\5\x\s\a\1\w\t\q\g\p\2\p\n\i\1\p\7\a\q ]] 00:30:52.284 00:30:52.284 real 0m7.223s 00:30:52.284 user 0m5.905s 00:30:52.284 sys 0m0.987s 00:30:52.284 ************************************ 00:30:52.284 END TEST dd_flag_nofollow_forced_aio 00:30:52.284 ************************************ 00:30:52.284 12:50:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:52.284 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:30:52.284 12:50:34 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:30:52.284 12:50:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:52.284 12:50:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:52.284 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:30:52.284 ************************************ 00:30:52.284 START TEST dd_flag_noatime_forced_aio 00:30:52.284 ************************************ 00:30:52.284 12:50:34 -- common/autotest_common.sh@1104 -- # noatime 00:30:52.284 12:50:34 -- dd/posix.sh@53 -- # local atime_if 00:30:52.284 12:50:34 -- dd/posix.sh@54 -- # local atime_of 00:30:52.284 12:50:34 -- dd/posix.sh@58 -- # gen_bytes 512 00:30:52.284 12:50:34 -- dd/common.sh@98 -- # xtrace_disable 00:30:52.284 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:30:52.284 12:50:34 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:52.284 12:50:34 -- dd/posix.sh@60 -- # atime_if=1727787032 
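The nofollow test above creates symlinks to both dump files with ln -fs and checks three things: reading through dd.dump0.link with --iflag=nofollow fails, writing through dd.dump1.link with --oflag=nofollow fails (both with "Too many levels of symbolic links", the ELOOP message O_NOFOLLOW produces for a symlink), and a final run without the flag follows the link normally. Condensed from the traced commands:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  NOT spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # must fail: ELOOP
  NOT spdk_dd --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # must fail: ELOOP
  spdk_dd --aio --if=dd.dump0.link --of=dd.dump1                        # no flag: link followed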
00:30:52.284 12:50:34 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:52.284 12:50:34 -- dd/posix.sh@61 -- # atime_of=1727787034 00:30:52.284 12:50:34 -- dd/posix.sh@66 -- # sleep 1 00:30:52.853 12:50:35 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:53.112 [2024-10-01 12:50:35.451309] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:53.112 [2024-10-01 12:50:35.451491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135381 ] 00:30:53.112 [2024-10-01 12:50:35.625001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.370 [2024-10-01 12:50:35.898391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.314  Copying: 512/512 [B] (average 500 kBps) 00:30:55.314 00:30:55.572 12:50:37 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:55.572 12:50:37 -- dd/posix.sh@69 -- # (( atime_if == 1727787032 )) 00:30:55.572 12:50:37 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:55.572 12:50:37 -- dd/posix.sh@70 -- # (( atime_of == 1727787034 )) 00:30:55.572 12:50:37 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:55.572 [2024-10-01 12:50:37.966069] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
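The noatime check brackets the copies with stat --printf=%X: after a one-second sleep, a copy with --iflag=noatime must leave the source access time where it was (the (( atime_if == 1727787032 )) assertion above), while the plain copy now starting must advance it (the (( atime_if < ... )) check that follows below). In outline, with short paths:

  atime_if=$(stat --printf=%X dd.dump0)
  atime_of=$(stat --printf=%X dd.dump1)
  sleep 1
  spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime: source atime unchanged
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1        # plain read updates the atime
  (( $(stat --printf=%X dd.dump0) > atime_if ))    # so the new atime is later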
00:30:55.572 [2024-10-01 12:50:37.966265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135419 ] 00:30:55.829 [2024-10-01 12:50:38.142500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.085 [2024-10-01 12:50:38.448460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.018  Copying: 512/512 [B] (average 500 kBps) 00:30:58.018 00:30:58.018 12:50:40 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:58.018 12:50:40 -- dd/posix.sh@73 -- # (( atime_if < 1727787038 )) 00:30:58.018 00:30:58.018 real 0m6.146s 00:30:58.018 user 0m4.105s 00:30:58.018 sys 0m0.769s 00:30:58.018 12:50:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:58.018 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:30:58.018 ************************************ 00:30:58.018 END TEST dd_flag_noatime_forced_aio 00:30:58.018 ************************************ 00:30:58.276 12:50:40 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:30:58.276 12:50:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:58.276 12:50:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:58.276 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:30:58.276 ************************************ 00:30:58.276 START TEST dd_flags_misc_forced_aio 00:30:58.276 ************************************ 00:30:58.276 12:50:40 -- common/autotest_common.sh@1104 -- # io 00:30:58.276 12:50:40 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:30:58.276 12:50:40 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:30:58.276 12:50:40 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:30:58.276 12:50:40 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:30:58.276 12:50:40 -- dd/posix.sh@86 -- # gen_bytes 512 00:30:58.276 12:50:40 -- dd/common.sh@98 -- # xtrace_disable 00:30:58.276 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:30:58.276 12:50:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:30:58.276 12:50:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:30:58.276 [2024-10-01 12:50:40.657092] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
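dd_flags_misc_forced_aio sweeps a small flag matrix: the read flags direct and nonblock are each paired with the write flags direct, nonblock, sync and dsync, and after every copy the 512-byte payload must survive intact (each giant [[ ... == \... ]] line is bash comparing the payload against itself, one backslash-escaped character at a time). Reconstructed from the arrays and loop heads in the trace (where gen_bytes sends its output is an assumption; the suite may plumb it differently):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)   # the write side adds sync and dsync
  for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512 > dd.dump0             # fresh random payload per read flag
      for flag_rw in "${flags_rw[@]}"; do
          spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                  --of=dd.dump1 --oflag="$flag_rw"
          [[ $(< dd.dump0) == "$(< dd.dump1)" ]]
      done
  done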
00:30:58.276 [2024-10-01 12:50:40.657832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135469 ] 00:30:58.535 [2024-10-01 12:50:40.830939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.793 [2024-10-01 12:50:41.122422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.951  Copying: 512/512 [B] (average 500 kBps) 00:31:00.951 00:31:00.951 12:50:43 -- dd/posix.sh@93 -- # [[ 77t43i1k2yc8txkhl1wj5r5vvsy6xreusdrt672qfr6tmpbc0petsv56923hbst9mk8tqnt15344rv0y4jwerxv1beh5fpwz14iz8lbexjzhdh68olmwuelm4salo7ffhrrple3zyyrln0169p4ubf3049ghpk0i6h8dmvebsgzrldqu9mg0qz14gvrpvbqvdrf6pouaql4377n5xm3y3772mx52idv3r6f8ywphkkeu7tub8ezc4duccm537pumnkzrrsa9hcafl963axphiucm58smzrsjidp0pi5b47tfpl0ts4dwlamrgq4v51vobrcszwobps7xytux0ftrxliogfc87s7k7xx23jc1zfybkmiel5zv27iu2b5vobkhkgmcdqdkahlfdhlncvuvjjyw1s4pk2sr0wdd965bwex70ywc0zi0eg467k9a01yt0gdh3y7y59mvlblild2z62wmqtd4t3ug1teql16zmq0in7io9pk9mbspq0okpyda == \7\7\t\4\3\i\1\k\2\y\c\8\t\x\k\h\l\1\w\j\5\r\5\v\v\s\y\6\x\r\e\u\s\d\r\t\6\7\2\q\f\r\6\t\m\p\b\c\0\p\e\t\s\v\5\6\9\2\3\h\b\s\t\9\m\k\8\t\q\n\t\1\5\3\4\4\r\v\0\y\4\j\w\e\r\x\v\1\b\e\h\5\f\p\w\z\1\4\i\z\8\l\b\e\x\j\z\h\d\h\6\8\o\l\m\w\u\e\l\m\4\s\a\l\o\7\f\f\h\r\r\p\l\e\3\z\y\y\r\l\n\0\1\6\9\p\4\u\b\f\3\0\4\9\g\h\p\k\0\i\6\h\8\d\m\v\e\b\s\g\z\r\l\d\q\u\9\m\g\0\q\z\1\4\g\v\r\p\v\b\q\v\d\r\f\6\p\o\u\a\q\l\4\3\7\7\n\5\x\m\3\y\3\7\7\2\m\x\5\2\i\d\v\3\r\6\f\8\y\w\p\h\k\k\e\u\7\t\u\b\8\e\z\c\4\d\u\c\c\m\5\3\7\p\u\m\n\k\z\r\r\s\a\9\h\c\a\f\l\9\6\3\a\x\p\h\i\u\c\m\5\8\s\m\z\r\s\j\i\d\p\0\p\i\5\b\4\7\t\f\p\l\0\t\s\4\d\w\l\a\m\r\g\q\4\v\5\1\v\o\b\r\c\s\z\w\o\b\p\s\7\x\y\t\u\x\0\f\t\r\x\l\i\o\g\f\c\8\7\s\7\k\7\x\x\2\3\j\c\1\z\f\y\b\k\m\i\e\l\5\z\v\2\7\i\u\2\b\5\v\o\b\k\h\k\g\m\c\d\q\d\k\a\h\l\f\d\h\l\n\c\v\u\v\j\j\y\w\1\s\4\p\k\2\s\r\0\w\d\d\9\6\5\b\w\e\x\7\0\y\w\c\0\z\i\0\e\g\4\6\7\k\9\a\0\1\y\t\0\g\d\h\3\y\7\y\5\9\m\v\l\b\l\i\l\d\2\z\6\2\w\m\q\t\d\4\t\3\u\g\1\t\e\q\l\1\6\z\m\q\0\i\n\7\i\o\9\p\k\9\m\b\s\p\q\0\o\k\p\y\d\a ]] 00:31:00.951 12:50:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:00.951 12:50:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:00.951 [2024-10-01 12:50:43.270316] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:00.951 [2024-10-01 12:50:43.270508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135509 ] 00:31:00.951 [2024-10-01 12:50:43.447458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.209 [2024-10-01 12:50:43.729718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.156  Copying: 512/512 [B] (average 500 kBps) 00:31:03.156 00:31:03.415 12:50:45 -- dd/posix.sh@93 -- # [[ 77t43i1k2yc8txkhl1wj5r5vvsy6xreusdrt672qfr6tmpbc0petsv56923hbst9mk8tqnt15344rv0y4jwerxv1beh5fpwz14iz8lbexjzhdh68olmwuelm4salo7ffhrrple3zyyrln0169p4ubf3049ghpk0i6h8dmvebsgzrldqu9mg0qz14gvrpvbqvdrf6pouaql4377n5xm3y3772mx52idv3r6f8ywphkkeu7tub8ezc4duccm537pumnkzrrsa9hcafl963axphiucm58smzrsjidp0pi5b47tfpl0ts4dwlamrgq4v51vobrcszwobps7xytux0ftrxliogfc87s7k7xx23jc1zfybkmiel5zv27iu2b5vobkhkgmcdqdkahlfdhlncvuvjjyw1s4pk2sr0wdd965bwex70ywc0zi0eg467k9a01yt0gdh3y7y59mvlblild2z62wmqtd4t3ug1teql16zmq0in7io9pk9mbspq0okpyda == \7\7\t\4\3\i\1\k\2\y\c\8\t\x\k\h\l\1\w\j\5\r\5\v\v\s\y\6\x\r\e\u\s\d\r\t\6\7\2\q\f\r\6\t\m\p\b\c\0\p\e\t\s\v\5\6\9\2\3\h\b\s\t\9\m\k\8\t\q\n\t\1\5\3\4\4\r\v\0\y\4\j\w\e\r\x\v\1\b\e\h\5\f\p\w\z\1\4\i\z\8\l\b\e\x\j\z\h\d\h\6\8\o\l\m\w\u\e\l\m\4\s\a\l\o\7\f\f\h\r\r\p\l\e\3\z\y\y\r\l\n\0\1\6\9\p\4\u\b\f\3\0\4\9\g\h\p\k\0\i\6\h\8\d\m\v\e\b\s\g\z\r\l\d\q\u\9\m\g\0\q\z\1\4\g\v\r\p\v\b\q\v\d\r\f\6\p\o\u\a\q\l\4\3\7\7\n\5\x\m\3\y\3\7\7\2\m\x\5\2\i\d\v\3\r\6\f\8\y\w\p\h\k\k\e\u\7\t\u\b\8\e\z\c\4\d\u\c\c\m\5\3\7\p\u\m\n\k\z\r\r\s\a\9\h\c\a\f\l\9\6\3\a\x\p\h\i\u\c\m\5\8\s\m\z\r\s\j\i\d\p\0\p\i\5\b\4\7\t\f\p\l\0\t\s\4\d\w\l\a\m\r\g\q\4\v\5\1\v\o\b\r\c\s\z\w\o\b\p\s\7\x\y\t\u\x\0\f\t\r\x\l\i\o\g\f\c\8\7\s\7\k\7\x\x\2\3\j\c\1\z\f\y\b\k\m\i\e\l\5\z\v\2\7\i\u\2\b\5\v\o\b\k\h\k\g\m\c\d\q\d\k\a\h\l\f\d\h\l\n\c\v\u\v\j\j\y\w\1\s\4\p\k\2\s\r\0\w\d\d\9\6\5\b\w\e\x\7\0\y\w\c\0\z\i\0\e\g\4\6\7\k\9\a\0\1\y\t\0\g\d\h\3\y\7\y\5\9\m\v\l\b\l\i\l\d\2\z\6\2\w\m\q\t\d\4\t\3\u\g\1\t\e\q\l\1\6\z\m\q\0\i\n\7\i\o\9\p\k\9\m\b\s\p\q\0\o\k\p\y\d\a ]] 00:31:03.415 12:50:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:03.415 12:50:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:03.415 [2024-10-01 12:50:45.795108] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:03.415 [2024-10-01 12:50:45.795311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135538 ] 00:31:03.757 [2024-10-01 12:50:45.972612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.757 [2024-10-01 12:50:46.243694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.700  Copying: 512/512 [B] (average 100 kBps) 00:31:05.700 00:31:05.700 12:50:48 -- dd/posix.sh@93 -- # [[ 77t43i1k2yc8txkhl1wj5r5vvsy6xreusdrt672qfr6tmpbc0petsv56923hbst9mk8tqnt15344rv0y4jwerxv1beh5fpwz14iz8lbexjzhdh68olmwuelm4salo7ffhrrple3zyyrln0169p4ubf3049ghpk0i6h8dmvebsgzrldqu9mg0qz14gvrpvbqvdrf6pouaql4377n5xm3y3772mx52idv3r6f8ywphkkeu7tub8ezc4duccm537pumnkzrrsa9hcafl963axphiucm58smzrsjidp0pi5b47tfpl0ts4dwlamrgq4v51vobrcszwobps7xytux0ftrxliogfc87s7k7xx23jc1zfybkmiel5zv27iu2b5vobkhkgmcdqdkahlfdhlncvuvjjyw1s4pk2sr0wdd965bwex70ywc0zi0eg467k9a01yt0gdh3y7y59mvlblild2z62wmqtd4t3ug1teql16zmq0in7io9pk9mbspq0okpyda == \7\7\t\4\3\i\1\k\2\y\c\8\t\x\k\h\l\1\w\j\5\r\5\v\v\s\y\6\x\r\e\u\s\d\r\t\6\7\2\q\f\r\6\t\m\p\b\c\0\p\e\t\s\v\5\6\9\2\3\h\b\s\t\9\m\k\8\t\q\n\t\1\5\3\4\4\r\v\0\y\4\j\w\e\r\x\v\1\b\e\h\5\f\p\w\z\1\4\i\z\8\l\b\e\x\j\z\h\d\h\6\8\o\l\m\w\u\e\l\m\4\s\a\l\o\7\f\f\h\r\r\p\l\e\3\z\y\y\r\l\n\0\1\6\9\p\4\u\b\f\3\0\4\9\g\h\p\k\0\i\6\h\8\d\m\v\e\b\s\g\z\r\l\d\q\u\9\m\g\0\q\z\1\4\g\v\r\p\v\b\q\v\d\r\f\6\p\o\u\a\q\l\4\3\7\7\n\5\x\m\3\y\3\7\7\2\m\x\5\2\i\d\v\3\r\6\f\8\y\w\p\h\k\k\e\u\7\t\u\b\8\e\z\c\4\d\u\c\c\m\5\3\7\p\u\m\n\k\z\r\r\s\a\9\h\c\a\f\l\9\6\3\a\x\p\h\i\u\c\m\5\8\s\m\z\r\s\j\i\d\p\0\p\i\5\b\4\7\t\f\p\l\0\t\s\4\d\w\l\a\m\r\g\q\4\v\5\1\v\o\b\r\c\s\z\w\o\b\p\s\7\x\y\t\u\x\0\f\t\r\x\l\i\o\g\f\c\8\7\s\7\k\7\x\x\2\3\j\c\1\z\f\y\b\k\m\i\e\l\5\z\v\2\7\i\u\2\b\5\v\o\b\k\h\k\g\m\c\d\q\d\k\a\h\l\f\d\h\l\n\c\v\u\v\j\j\y\w\1\s\4\p\k\2\s\r\0\w\d\d\9\6\5\b\w\e\x\7\0\y\w\c\0\z\i\0\e\g\4\6\7\k\9\a\0\1\y\t\0\g\d\h\3\y\7\y\5\9\m\v\l\b\l\i\l\d\2\z\6\2\w\m\q\t\d\4\t\3\u\g\1\t\e\q\l\1\6\z\m\q\0\i\n\7\i\o\9\p\k\9\m\b\s\p\q\0\o\k\p\y\d\a ]] 00:31:05.700 12:50:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:05.700 12:50:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:05.700 [2024-10-01 12:50:48.212269] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:05.700 [2024-10-01 12:50:48.212474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135574 ] 00:31:05.958 [2024-10-01 12:50:48.389443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.216 [2024-10-01 12:50:48.650247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.285  Copying: 512/512 [B] (average 250 kBps) 00:31:08.285 00:31:08.285 12:50:50 -- dd/posix.sh@93 -- # [[ 77t43i1k2yc8txkhl1wj5r5vvsy6xreusdrt672qfr6tmpbc0petsv56923hbst9mk8tqnt15344rv0y4jwerxv1beh5fpwz14iz8lbexjzhdh68olmwuelm4salo7ffhrrple3zyyrln0169p4ubf3049ghpk0i6h8dmvebsgzrldqu9mg0qz14gvrpvbqvdrf6pouaql4377n5xm3y3772mx52idv3r6f8ywphkkeu7tub8ezc4duccm537pumnkzrrsa9hcafl963axphiucm58smzrsjidp0pi5b47tfpl0ts4dwlamrgq4v51vobrcszwobps7xytux0ftrxliogfc87s7k7xx23jc1zfybkmiel5zv27iu2b5vobkhkgmcdqdkahlfdhlncvuvjjyw1s4pk2sr0wdd965bwex70ywc0zi0eg467k9a01yt0gdh3y7y59mvlblild2z62wmqtd4t3ug1teql16zmq0in7io9pk9mbspq0okpyda == \7\7\t\4\3\i\1\k\2\y\c\8\t\x\k\h\l\1\w\j\5\r\5\v\v\s\y\6\x\r\e\u\s\d\r\t\6\7\2\q\f\r\6\t\m\p\b\c\0\p\e\t\s\v\5\6\9\2\3\h\b\s\t\9\m\k\8\t\q\n\t\1\5\3\4\4\r\v\0\y\4\j\w\e\r\x\v\1\b\e\h\5\f\p\w\z\1\4\i\z\8\l\b\e\x\j\z\h\d\h\6\8\o\l\m\w\u\e\l\m\4\s\a\l\o\7\f\f\h\r\r\p\l\e\3\z\y\y\r\l\n\0\1\6\9\p\4\u\b\f\3\0\4\9\g\h\p\k\0\i\6\h\8\d\m\v\e\b\s\g\z\r\l\d\q\u\9\m\g\0\q\z\1\4\g\v\r\p\v\b\q\v\d\r\f\6\p\o\u\a\q\l\4\3\7\7\n\5\x\m\3\y\3\7\7\2\m\x\5\2\i\d\v\3\r\6\f\8\y\w\p\h\k\k\e\u\7\t\u\b\8\e\z\c\4\d\u\c\c\m\5\3\7\p\u\m\n\k\z\r\r\s\a\9\h\c\a\f\l\9\6\3\a\x\p\h\i\u\c\m\5\8\s\m\z\r\s\j\i\d\p\0\p\i\5\b\4\7\t\f\p\l\0\t\s\4\d\w\l\a\m\r\g\q\4\v\5\1\v\o\b\r\c\s\z\w\o\b\p\s\7\x\y\t\u\x\0\f\t\r\x\l\i\o\g\f\c\8\7\s\7\k\7\x\x\2\3\j\c\1\z\f\y\b\k\m\i\e\l\5\z\v\2\7\i\u\2\b\5\v\o\b\k\h\k\g\m\c\d\q\d\k\a\h\l\f\d\h\l\n\c\v\u\v\j\j\y\w\1\s\4\p\k\2\s\r\0\w\d\d\9\6\5\b\w\e\x\7\0\y\w\c\0\z\i\0\e\g\4\6\7\k\9\a\0\1\y\t\0\g\d\h\3\y\7\y\5\9\m\v\l\b\l\i\l\d\2\z\6\2\w\m\q\t\d\4\t\3\u\g\1\t\e\q\l\1\6\z\m\q\0\i\n\7\i\o\9\p\k\9\m\b\s\p\q\0\o\k\p\y\d\a ]] 00:31:08.285 12:50:50 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:08.285 12:50:50 -- dd/posix.sh@86 -- # gen_bytes 512 00:31:08.285 12:50:50 -- dd/common.sh@98 -- # xtrace_disable 00:31:08.285 12:50:50 -- common/autotest_common.sh@10 -- # set +x 00:31:08.285 12:50:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:08.285 12:50:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:08.285 [2024-10-01 12:50:50.637115] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:08.285 [2024-10-01 12:50:50.637293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135605 ] 00:31:08.285 [2024-10-01 12:50:50.807603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.544 [2024-10-01 12:50:51.064529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.486  Copying: 512/512 [B] (average 500 kBps) 00:31:10.486 00:31:10.486 12:50:52 -- dd/posix.sh@93 -- # [[ fxvbzt40n6weyzpof5ssd59t17lhbirvnwsrs5kzckc2kkes2famzang13n1791rav4v3iwghzte95udq6nf2lqy0y36eww1oatzf1bb27toqms06etu84lpk8t6rk53vb6ksgu3tcyjxtxxsquuc0547gq87nz715io065fzp5btayity649xu94s7ik4a5kfdk9485lhyaj6kefv45pbrk44osx7496mavdn1xp1tos3bdzl320s1v4r0obv5p125lcn0snjgy6qu7iqx5ifdirsi38b0v367rjfv9c6o2g3s0n35hqzwrdwidxqr3dcfy6e08rgmhr9v36q4uhxs5dhuocv0l8v2blc0idu7em5lflqov7w4juu23cg4x8hi2iea5lxupnyeofhz1wog833tunrl079e3pcb6fqfm7wzjaq1s3ucb6qph97bkjh4s0fqnwd92d7fx7vgu17hivheiejtpt03jb5hlxwb9okdj8l46sfd42v0epllu == \f\x\v\b\z\t\4\0\n\6\w\e\y\z\p\o\f\5\s\s\d\5\9\t\1\7\l\h\b\i\r\v\n\w\s\r\s\5\k\z\c\k\c\2\k\k\e\s\2\f\a\m\z\a\n\g\1\3\n\1\7\9\1\r\a\v\4\v\3\i\w\g\h\z\t\e\9\5\u\d\q\6\n\f\2\l\q\y\0\y\3\6\e\w\w\1\o\a\t\z\f\1\b\b\2\7\t\o\q\m\s\0\6\e\t\u\8\4\l\p\k\8\t\6\r\k\5\3\v\b\6\k\s\g\u\3\t\c\y\j\x\t\x\x\s\q\u\u\c\0\5\4\7\g\q\8\7\n\z\7\1\5\i\o\0\6\5\f\z\p\5\b\t\a\y\i\t\y\6\4\9\x\u\9\4\s\7\i\k\4\a\5\k\f\d\k\9\4\8\5\l\h\y\a\j\6\k\e\f\v\4\5\p\b\r\k\4\4\o\s\x\7\4\9\6\m\a\v\d\n\1\x\p\1\t\o\s\3\b\d\z\l\3\2\0\s\1\v\4\r\0\o\b\v\5\p\1\2\5\l\c\n\0\s\n\j\g\y\6\q\u\7\i\q\x\5\i\f\d\i\r\s\i\3\8\b\0\v\3\6\7\r\j\f\v\9\c\6\o\2\g\3\s\0\n\3\5\h\q\z\w\r\d\w\i\d\x\q\r\3\d\c\f\y\6\e\0\8\r\g\m\h\r\9\v\3\6\q\4\u\h\x\s\5\d\h\u\o\c\v\0\l\8\v\2\b\l\c\0\i\d\u\7\e\m\5\l\f\l\q\o\v\7\w\4\j\u\u\2\3\c\g\4\x\8\h\i\2\i\e\a\5\l\x\u\p\n\y\e\o\f\h\z\1\w\o\g\8\3\3\t\u\n\r\l\0\7\9\e\3\p\c\b\6\f\q\f\m\7\w\z\j\a\q\1\s\3\u\c\b\6\q\p\h\9\7\b\k\j\h\4\s\0\f\q\n\w\d\9\2\d\7\f\x\7\v\g\u\1\7\h\i\v\h\e\i\e\j\t\p\t\0\3\j\b\5\h\l\x\w\b\9\o\k\d\j\8\l\4\6\s\f\d\4\2\v\0\e\p\l\l\u ]] 00:31:10.486 12:50:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:10.486 12:50:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:10.486 [2024-10-01 12:50:53.017811] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:10.486 [2024-10-01 12:50:53.017980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135636 ] 00:31:10.746 [2024-10-01 12:50:53.189736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.005 [2024-10-01 12:50:53.459680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.023  Copying: 512/512 [B] (average 500 kBps) 00:31:13.023 00:31:13.023 12:50:55 -- dd/posix.sh@93 -- # [[ fxvbzt40n6weyzpof5ssd59t17lhbirvnwsrs5kzckc2kkes2famzang13n1791rav4v3iwghzte95udq6nf2lqy0y36eww1oatzf1bb27toqms06etu84lpk8t6rk53vb6ksgu3tcyjxtxxsquuc0547gq87nz715io065fzp5btayity649xu94s7ik4a5kfdk9485lhyaj6kefv45pbrk44osx7496mavdn1xp1tos3bdzl320s1v4r0obv5p125lcn0snjgy6qu7iqx5ifdirsi38b0v367rjfv9c6o2g3s0n35hqzwrdwidxqr3dcfy6e08rgmhr9v36q4uhxs5dhuocv0l8v2blc0idu7em5lflqov7w4juu23cg4x8hi2iea5lxupnyeofhz1wog833tunrl079e3pcb6fqfm7wzjaq1s3ucb6qph97bkjh4s0fqnwd92d7fx7vgu17hivheiejtpt03jb5hlxwb9okdj8l46sfd42v0epllu == \f\x\v\b\z\t\4\0\n\6\w\e\y\z\p\o\f\5\s\s\d\5\9\t\1\7\l\h\b\i\r\v\n\w\s\r\s\5\k\z\c\k\c\2\k\k\e\s\2\f\a\m\z\a\n\g\1\3\n\1\7\9\1\r\a\v\4\v\3\i\w\g\h\z\t\e\9\5\u\d\q\6\n\f\2\l\q\y\0\y\3\6\e\w\w\1\o\a\t\z\f\1\b\b\2\7\t\o\q\m\s\0\6\e\t\u\8\4\l\p\k\8\t\6\r\k\5\3\v\b\6\k\s\g\u\3\t\c\y\j\x\t\x\x\s\q\u\u\c\0\5\4\7\g\q\8\7\n\z\7\1\5\i\o\0\6\5\f\z\p\5\b\t\a\y\i\t\y\6\4\9\x\u\9\4\s\7\i\k\4\a\5\k\f\d\k\9\4\8\5\l\h\y\a\j\6\k\e\f\v\4\5\p\b\r\k\4\4\o\s\x\7\4\9\6\m\a\v\d\n\1\x\p\1\t\o\s\3\b\d\z\l\3\2\0\s\1\v\4\r\0\o\b\v\5\p\1\2\5\l\c\n\0\s\n\j\g\y\6\q\u\7\i\q\x\5\i\f\d\i\r\s\i\3\8\b\0\v\3\6\7\r\j\f\v\9\c\6\o\2\g\3\s\0\n\3\5\h\q\z\w\r\d\w\i\d\x\q\r\3\d\c\f\y\6\e\0\8\r\g\m\h\r\9\v\3\6\q\4\u\h\x\s\5\d\h\u\o\c\v\0\l\8\v\2\b\l\c\0\i\d\u\7\e\m\5\l\f\l\q\o\v\7\w\4\j\u\u\2\3\c\g\4\x\8\h\i\2\i\e\a\5\l\x\u\p\n\y\e\o\f\h\z\1\w\o\g\8\3\3\t\u\n\r\l\0\7\9\e\3\p\c\b\6\f\q\f\m\7\w\z\j\a\q\1\s\3\u\c\b\6\q\p\h\9\7\b\k\j\h\4\s\0\f\q\n\w\d\9\2\d\7\f\x\7\v\g\u\1\7\h\i\v\h\e\i\e\j\t\p\t\0\3\j\b\5\h\l\x\w\b\9\o\k\d\j\8\l\4\6\s\f\d\4\2\v\0\e\p\l\l\u ]] 00:31:13.023 12:50:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:13.023 12:50:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:13.023 [2024-10-01 12:50:55.438608] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:13.023 [2024-10-01 12:50:55.438799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135670 ] 00:31:13.281 [2024-10-01 12:50:55.613766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.540 [2024-10-01 12:50:55.874631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.177  Copying: 512/512 [B] (average 166 kBps) 00:31:15.177 00:31:15.177 12:50:57 -- dd/posix.sh@93 -- # [[ fxvbzt40n6weyzpof5ssd59t17lhbirvnwsrs5kzckc2kkes2famzang13n1791rav4v3iwghzte95udq6nf2lqy0y36eww1oatzf1bb27toqms06etu84lpk8t6rk53vb6ksgu3tcyjxtxxsquuc0547gq87nz715io065fzp5btayity649xu94s7ik4a5kfdk9485lhyaj6kefv45pbrk44osx7496mavdn1xp1tos3bdzl320s1v4r0obv5p125lcn0snjgy6qu7iqx5ifdirsi38b0v367rjfv9c6o2g3s0n35hqzwrdwidxqr3dcfy6e08rgmhr9v36q4uhxs5dhuocv0l8v2blc0idu7em5lflqov7w4juu23cg4x8hi2iea5lxupnyeofhz1wog833tunrl079e3pcb6fqfm7wzjaq1s3ucb6qph97bkjh4s0fqnwd92d7fx7vgu17hivheiejtpt03jb5hlxwb9okdj8l46sfd42v0epllu == \f\x\v\b\z\t\4\0\n\6\w\e\y\z\p\o\f\5\s\s\d\5\9\t\1\7\l\h\b\i\r\v\n\w\s\r\s\5\k\z\c\k\c\2\k\k\e\s\2\f\a\m\z\a\n\g\1\3\n\1\7\9\1\r\a\v\4\v\3\i\w\g\h\z\t\e\9\5\u\d\q\6\n\f\2\l\q\y\0\y\3\6\e\w\w\1\o\a\t\z\f\1\b\b\2\7\t\o\q\m\s\0\6\e\t\u\8\4\l\p\k\8\t\6\r\k\5\3\v\b\6\k\s\g\u\3\t\c\y\j\x\t\x\x\s\q\u\u\c\0\5\4\7\g\q\8\7\n\z\7\1\5\i\o\0\6\5\f\z\p\5\b\t\a\y\i\t\y\6\4\9\x\u\9\4\s\7\i\k\4\a\5\k\f\d\k\9\4\8\5\l\h\y\a\j\6\k\e\f\v\4\5\p\b\r\k\4\4\o\s\x\7\4\9\6\m\a\v\d\n\1\x\p\1\t\o\s\3\b\d\z\l\3\2\0\s\1\v\4\r\0\o\b\v\5\p\1\2\5\l\c\n\0\s\n\j\g\y\6\q\u\7\i\q\x\5\i\f\d\i\r\s\i\3\8\b\0\v\3\6\7\r\j\f\v\9\c\6\o\2\g\3\s\0\n\3\5\h\q\z\w\r\d\w\i\d\x\q\r\3\d\c\f\y\6\e\0\8\r\g\m\h\r\9\v\3\6\q\4\u\h\x\s\5\d\h\u\o\c\v\0\l\8\v\2\b\l\c\0\i\d\u\7\e\m\5\l\f\l\q\o\v\7\w\4\j\u\u\2\3\c\g\4\x\8\h\i\2\i\e\a\5\l\x\u\p\n\y\e\o\f\h\z\1\w\o\g\8\3\3\t\u\n\r\l\0\7\9\e\3\p\c\b\6\f\q\f\m\7\w\z\j\a\q\1\s\3\u\c\b\6\q\p\h\9\7\b\k\j\h\4\s\0\f\q\n\w\d\9\2\d\7\f\x\7\v\g\u\1\7\h\i\v\h\e\i\e\j\t\p\t\0\3\j\b\5\h\l\x\w\b\9\o\k\d\j\8\l\4\6\s\f\d\4\2\v\0\e\p\l\l\u ]] 00:31:15.177 12:50:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:15.177 12:50:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:15.436 [2024-10-01 12:50:57.789130] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:15.436 [2024-10-01 12:50:57.790185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135699 ] 00:31:15.436 [2024-10-01 12:50:57.965669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.727 [2024-10-01 12:50:58.221645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.673  Copying: 512/512 [B] (average 166 kBps) 00:31:17.673 00:31:17.673 ************************************ 00:31:17.673 END TEST dd_flags_misc_forced_aio 00:31:17.673 ************************************ 00:31:17.673 12:50:59 -- dd/posix.sh@93 -- # [[ fxvbzt40n6weyzpof5ssd59t17lhbirvnwsrs5kzckc2kkes2famzang13n1791rav4v3iwghzte95udq6nf2lqy0y36eww1oatzf1bb27toqms06etu84lpk8t6rk53vb6ksgu3tcyjxtxxsquuc0547gq87nz715io065fzp5btayity649xu94s7ik4a5kfdk9485lhyaj6kefv45pbrk44osx7496mavdn1xp1tos3bdzl320s1v4r0obv5p125lcn0snjgy6qu7iqx5ifdirsi38b0v367rjfv9c6o2g3s0n35hqzwrdwidxqr3dcfy6e08rgmhr9v36q4uhxs5dhuocv0l8v2blc0idu7em5lflqov7w4juu23cg4x8hi2iea5lxupnyeofhz1wog833tunrl079e3pcb6fqfm7wzjaq1s3ucb6qph97bkjh4s0fqnwd92d7fx7vgu17hivheiejtpt03jb5hlxwb9okdj8l46sfd42v0epllu == \f\x\v\b\z\t\4\0\n\6\w\e\y\z\p\o\f\5\s\s\d\5\9\t\1\7\l\h\b\i\r\v\n\w\s\r\s\5\k\z\c\k\c\2\k\k\e\s\2\f\a\m\z\a\n\g\1\3\n\1\7\9\1\r\a\v\4\v\3\i\w\g\h\z\t\e\9\5\u\d\q\6\n\f\2\l\q\y\0\y\3\6\e\w\w\1\o\a\t\z\f\1\b\b\2\7\t\o\q\m\s\0\6\e\t\u\8\4\l\p\k\8\t\6\r\k\5\3\v\b\6\k\s\g\u\3\t\c\y\j\x\t\x\x\s\q\u\u\c\0\5\4\7\g\q\8\7\n\z\7\1\5\i\o\0\6\5\f\z\p\5\b\t\a\y\i\t\y\6\4\9\x\u\9\4\s\7\i\k\4\a\5\k\f\d\k\9\4\8\5\l\h\y\a\j\6\k\e\f\v\4\5\p\b\r\k\4\4\o\s\x\7\4\9\6\m\a\v\d\n\1\x\p\1\t\o\s\3\b\d\z\l\3\2\0\s\1\v\4\r\0\o\b\v\5\p\1\2\5\l\c\n\0\s\n\j\g\y\6\q\u\7\i\q\x\5\i\f\d\i\r\s\i\3\8\b\0\v\3\6\7\r\j\f\v\9\c\6\o\2\g\3\s\0\n\3\5\h\q\z\w\r\d\w\i\d\x\q\r\3\d\c\f\y\6\e\0\8\r\g\m\h\r\9\v\3\6\q\4\u\h\x\s\5\d\h\u\o\c\v\0\l\8\v\2\b\l\c\0\i\d\u\7\e\m\5\l\f\l\q\o\v\7\w\4\j\u\u\2\3\c\g\4\x\8\h\i\2\i\e\a\5\l\x\u\p\n\y\e\o\f\h\z\1\w\o\g\8\3\3\t\u\n\r\l\0\7\9\e\3\p\c\b\6\f\q\f\m\7\w\z\j\a\q\1\s\3\u\c\b\6\q\p\h\9\7\b\k\j\h\4\s\0\f\q\n\w\d\9\2\d\7\f\x\7\v\g\u\1\7\h\i\v\h\e\i\e\j\t\p\t\0\3\j\b\5\h\l\x\w\b\9\o\k\d\j\8\l\4\6\s\f\d\4\2\v\0\e\p\l\l\u ]] 00:31:17.673 00:31:17.673 real 0m19.414s 00:31:17.673 user 0m15.549s 00:31:17.673 sys 0m2.783s 00:31:17.673 12:50:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.673 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:31:17.673 12:51:00 -- dd/posix.sh@1 -- # cleanup 00:31:17.673 12:51:00 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:17.673 12:51:00 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:17.673 00:31:17.673 real 1m17.977s 00:31:17.674 user 1m0.978s 00:31:17.674 sys 0m10.971s 00:31:17.674 12:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.674 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:31:17.674 ************************************ 00:31:17.674 END TEST spdk_dd_posix 00:31:17.674 ************************************ 00:31:17.674 12:51:00 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:31:17.674 12:51:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:17.674 12:51:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:17.674 12:51:00 -- 
common/autotest_common.sh@10 -- # set +x 00:31:17.674 ************************************ 00:31:17.674 START TEST spdk_dd_malloc 00:31:17.674 ************************************ 00:31:17.674 12:51:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:31:17.932 * Looking for test storage... 00:31:17.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:17.932 12:51:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:17.932 12:51:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.932 12:51:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.932 12:51:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.932 12:51:00 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:17.932 12:51:00 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:17.932 12:51:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:17.932 12:51:00 -- paths/export.sh@5 -- # export PATH 00:31:17.932 12:51:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:17.932 12:51:00 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:31:17.932 12:51:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:17.932 12:51:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:17.932 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:31:17.932 ************************************ 00:31:17.932 START TEST dd_malloc_copy 00:31:17.932 ************************************ 00:31:17.932 12:51:00 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:31:17.932 12:51:00 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:31:17.932 12:51:00 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:31:17.932 12:51:00 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:31:17.932 12:51:00 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:31:17.932 12:51:00 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:31:17.932 12:51:00 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:31:17.932 12:51:00 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:31:17.932 12:51:00 -- dd/malloc.sh@28 -- # gen_conf 00:31:17.932 12:51:00 -- dd/common.sh@31 -- # xtrace_disable 00:31:17.932 12:51:00 -- common/autotest_common.sh@10 -- # set +x 00:31:17.932 { 00:31:17.932 "subsystems": [ 00:31:17.932 { 00:31:17.932 "subsystem": "bdev", 00:31:17.932 "config": [ 00:31:17.932 { 00:31:17.932 "params": { 00:31:17.932 "block_size": 512, 00:31:17.932 "num_blocks": 1048576, 00:31:17.932 "name": "malloc0" 00:31:17.932 }, 00:31:17.933 "method": "bdev_malloc_create" 00:31:17.933 }, 00:31:17.933 { 00:31:17.933 "params": { 00:31:17.933 "block_size": 512, 00:31:17.933 "num_blocks": 1048576, 00:31:17.933 "name": "malloc1" 00:31:17.933 }, 00:31:17.933 "method": "bdev_malloc_create" 00:31:17.933 }, 00:31:17.933 { 00:31:17.933 "method": "bdev_wait_for_examine" 00:31:17.933 } 00:31:17.933 ] 00:31:17.933 } 00:31:17.933 ] 00:31:17.933 } 00:31:17.933 [2024-10-01 12:51:00.366815] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
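dd_malloc_copy drives spdk_dd between two RAM-backed bdevs instead of files: the JSON shown above declares malloc0 and malloc1 (1048576 blocks of 512 bytes, 512 MiB each), it is handed to spdk_dd on file descriptor 62, and --ib/--ob select the bdevs by name. Roughly how that wiring looks if written by hand (the suite builds the JSON with gen_conf; the inline string below is a simplified stand-in):

  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
    {"method":"bdev_malloc_create","params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
    {"method":"bdev_wait_for_examine"}]}]}'
  spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<<<"$conf"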
00:31:17.933 [2024-10-01 12:51:00.367002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135806 ] 00:31:18.244 [2024-10-01 12:51:00.539137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.502 [2024-10-01 12:51:00.793478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.345  Copying: 230/512 [MB] (230 MBps) Copying: 463/512 [MB] (232 MBps) Copying: 512/512 [MB] (average 232 MBps) 00:31:27.345 00:31:27.345 12:51:09 -- dd/malloc.sh@33 -- # gen_conf 00:31:27.345 12:51:09 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:31:27.345 12:51:09 -- dd/common.sh@31 -- # xtrace_disable 00:31:27.345 12:51:09 -- common/autotest_common.sh@10 -- # set +x 00:31:27.345 { 00:31:27.345 "subsystems": [ 00:31:27.345 { 00:31:27.345 "subsystem": "bdev", 00:31:27.345 "config": [ 00:31:27.345 { 00:31:27.345 "params": { 00:31:27.345 "block_size": 512, 00:31:27.345 "num_blocks": 1048576, 00:31:27.345 "name": "malloc0" 00:31:27.345 }, 00:31:27.345 "method": "bdev_malloc_create" 00:31:27.345 }, 00:31:27.345 { 00:31:27.345 "params": { 00:31:27.345 "block_size": 512, 00:31:27.345 "num_blocks": 1048576, 00:31:27.345 "name": "malloc1" 00:31:27.345 }, 00:31:27.345 "method": "bdev_malloc_create" 00:31:27.345 }, 00:31:27.345 { 00:31:27.345 "method": "bdev_wait_for_examine" 00:31:27.345 } 00:31:27.345 ] 00:31:27.345 } 00:31:27.345 ] 00:31:27.345 } 00:31:27.345 [2024-10-01 12:51:09.095965] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:27.345 [2024-10-01 12:51:09.096129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135911 ] 00:31:27.345 [2024-10-01 12:51:09.253194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.345 [2024-10-01 12:51:09.531314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.332  Copying: 228/512 [MB] (228 MBps) Copying: 460/512 [MB] (232 MBps) Copying: 512/512 [MB] (average 231 MBps) 00:31:36.332 00:31:36.332 ************************************ 00:31:36.332 END TEST dd_malloc_copy 00:31:36.332 ************************************ 00:31:36.332 00:31:36.332 real 0m17.750s 00:31:36.332 user 0m16.039s 00:31:36.332 sys 0m1.586s 00:31:36.332 12:51:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.332 12:51:18 -- common/autotest_common.sh@10 -- # set +x 00:31:36.332 00:31:36.332 real 0m17.961s 00:31:36.332 user 0m16.130s 00:31:36.332 sys 0m1.718s 00:31:36.332 12:51:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.332 12:51:18 -- common/autotest_common.sh@10 -- # set +x 00:31:36.332 ************************************ 00:31:36.332 END TEST spdk_dd_malloc 00:31:36.332 ************************************ 00:31:36.332 12:51:18 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:31:36.332 12:51:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:36.332 12:51:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.332 12:51:18 -- common/autotest_common.sh@10 -- # set +x 00:31:36.332 ************************************ 
00:31:36.332 START TEST spdk_dd_bdev_to_bdev 00:31:36.332 ************************************ 00:31:36.332 12:51:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:31:36.332 * Looking for test storage... 00:31:36.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:36.332 12:51:18 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:36.332 12:51:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.332 12:51:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.332 12:51:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.332 12:51:18 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:36.332 12:51:18 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:36.332 12:51:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:36.332 12:51:18 -- paths/export.sh@5 -- # export PATH 00:31:36.332 12:51:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:31:36.332 12:51:18 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:31:36.332 [2024-10-01 12:51:18.384751] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:36.332 [2024-10-01 12:51:18.385018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136079 ] 00:31:36.332 [2024-10-01 12:51:18.548485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.332 [2024-10-01 12:51:18.817011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.646  Copying: 256/256 [MB] (average 1003 MBps) 00:31:38.646 00:31:38.646 12:51:20 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:38.646 12:51:20 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:38.646 12:51:20 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:31:38.646 12:51:20 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:31:38.646 12:51:20 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:31:38.646 12:51:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:38.646 12:51:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:38.646 12:51:20 -- common/autotest_common.sh@10 -- # set +x 00:31:38.646 ************************************ 00:31:38.646 START TEST dd_inflate_file 00:31:38.646 ************************************ 00:31:38.646 12:51:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:31:38.646 [2024-10-01 12:51:20.997549] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:38.646 [2024-10-01 12:51:20.997732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136124 ] 00:31:38.646 [2024-10-01 12:51:21.162890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.904 [2024-10-01 12:51:21.418255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.845  Copying: 64/64 [MB] (average 984 MBps) 00:31:40.845 00:31:40.845 00:31:40.845 real 0m2.404s 00:31:40.845 user 0m1.904s 00:31:40.845 sys 0m0.349s 00:31:40.845 ************************************ 00:31:40.845 END TEST dd_inflate_file 00:31:40.845 ************************************ 00:31:40.845 12:51:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.845 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:31:41.193 12:51:23 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:31:41.193 12:51:23 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:31:41.193 12:51:23 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:31:41.193 12:51:23 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:31:41.193 12:51:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:31:41.193 12:51:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:41.193 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:31:41.193 12:51:23 -- dd/common.sh@31 -- # xtrace_disable 00:31:41.193 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:31:41.193 ************************************ 00:31:41.193 START TEST dd_copy_to_out_bdev 00:31:41.193 ************************************ 00:31:41.193 12:51:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:31:41.193 { 00:31:41.193 "subsystems": [ 00:31:41.193 { 00:31:41.193 "subsystem": "bdev", 00:31:41.193 "config": [ 00:31:41.193 { 00:31:41.193 "params": { 00:31:41.193 "block_size": 4096, 00:31:41.193 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:31:41.193 "name": "aio1" 00:31:41.193 }, 00:31:41.193 "method": "bdev_aio_create" 00:31:41.193 }, 00:31:41.193 { 00:31:41.193 "params": { 00:31:41.193 "trtype": "pcie", 00:31:41.193 "traddr": "0000:00:06.0", 00:31:41.193 "name": "Nvme0" 00:31:41.193 }, 00:31:41.193 "method": "bdev_nvme_attach_controller" 00:31:41.193 }, 00:31:41.193 { 00:31:41.193 "method": "bdev_wait_for_examine" 00:31:41.193 } 00:31:41.193 ] 00:31:41.193 } 00:31:41.193 ] 00:31:41.193 } 00:31:41.193 [2024-10-01 12:51:23.502971] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:41.194 [2024-10-01 12:51:23.503135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136181 ] 00:31:41.194 [2024-10-01 12:51:23.677291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.453 [2024-10-01 12:51:23.936597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.733  Copying: 62/64 [MB] (62 MBps) Copying: 64/64 [MB] (average 62 MBps) 00:31:44.733 00:31:44.733 00:31:44.733 real 0m3.620s 00:31:44.733 user 0m3.131s 00:31:44.733 sys 0m0.399s 00:31:44.733 12:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:44.733 12:51:27 -- common/autotest_common.sh@10 -- # set +x 00:31:44.733 ************************************ 00:31:44.733 END TEST dd_copy_to_out_bdev 00:31:44.733 ************************************ 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:31:44.733 12:51:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:44.733 12:51:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:44.733 12:51:27 -- common/autotest_common.sh@10 -- # set +x 00:31:44.733 ************************************ 00:31:44.733 START TEST dd_offset_magic 00:31:44.733 ************************************ 00:31:44.733 12:51:27 -- common/autotest_common.sh@1104 -- # offset_magic 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:31:44.733 12:51:27 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:31:44.733 12:51:27 -- dd/common.sh@31 -- # xtrace_disable 00:31:44.733 12:51:27 -- common/autotest_common.sh@10 -- # set +x 00:31:44.733 { 00:31:44.733 "subsystems": [ 00:31:44.733 { 00:31:44.733 "subsystem": "bdev", 00:31:44.733 "config": [ 00:31:44.733 { 00:31:44.733 "params": { 00:31:44.733 "block_size": 4096, 00:31:44.733 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:31:44.733 "name": "aio1" 00:31:44.733 }, 00:31:44.733 "method": "bdev_aio_create" 00:31:44.733 }, 00:31:44.733 { 00:31:44.733 "params": { 00:31:44.733 "trtype": "pcie", 00:31:44.733 "traddr": "0000:00:06.0", 00:31:44.733 "name": "Nvme0" 00:31:44.733 }, 00:31:44.733 "method": "bdev_nvme_attach_controller" 00:31:44.733 }, 00:31:44.733 { 00:31:44.733 "method": "bdev_wait_for_examine" 00:31:44.733 } 00:31:44.733 ] 00:31:44.733 } 00:31:44.733 ] 00:31:44.733 } 00:31:44.733 [2024-10-01 12:51:27.210456] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:44.733 [2024-10-01 12:51:27.210663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136254 ] 00:31:44.992 [2024-10-01 12:51:27.390666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.251 [2024-10-01 12:51:27.670105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.740  Copying: 65/65 [MB] (average 1120 MBps) 00:31:47.740 00:31:47.740 12:51:29 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:31:47.740 12:51:29 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:31:47.740 12:51:29 -- dd/common.sh@31 -- # xtrace_disable 00:31:47.740 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:31:47.740 { 00:31:47.740 "subsystems": [ 00:31:47.740 { 00:31:47.740 "subsystem": "bdev", 00:31:47.740 "config": [ 00:31:47.740 { 00:31:47.740 "params": { 00:31:47.740 "block_size": 4096, 00:31:47.740 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:31:47.740 "name": "aio1" 00:31:47.740 }, 00:31:47.740 "method": "bdev_aio_create" 00:31:47.740 }, 00:31:47.740 { 00:31:47.740 "params": { 00:31:47.740 "trtype": "pcie", 00:31:47.740 "traddr": "0000:00:06.0", 00:31:47.740 "name": "Nvme0" 00:31:47.740 }, 00:31:47.740 "method": "bdev_nvme_attach_controller" 00:31:47.740 }, 00:31:47.740 { 00:31:47.740 "method": "bdev_wait_for_examine" 00:31:47.740 } 00:31:47.740 ] 00:31:47.740 } 00:31:47.740 ] 00:31:47.740 } 00:31:47.740 [2024-10-01 12:51:29.906655] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:47.740 [2024-10-01 12:51:29.906840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136292 ] 00:31:47.740 [2024-10-01 12:51:30.078972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.999 [2024-10-01 12:51:30.363816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.947  Copying: 1024/1024 [kB] (average 500 MBps) 00:31:49.947 00:31:49.947 12:51:32 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:31:49.947 12:51:32 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:31:49.947 12:51:32 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:31:49.947 12:51:32 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:31:49.947 12:51:32 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:31:50.207 12:51:32 -- dd/common.sh@31 -- # xtrace_disable 00:31:50.207 12:51:32 -- common/autotest_common.sh@10 -- # set +x 00:31:50.207 { 00:31:50.207 "subsystems": [ 00:31:50.207 { 00:31:50.207 "subsystem": "bdev", 00:31:50.207 "config": [ 00:31:50.207 { 00:31:50.207 "params": { 00:31:50.207 "block_size": 4096, 00:31:50.207 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:31:50.207 "name": "aio1" 00:31:50.207 }, 00:31:50.207 "method": "bdev_aio_create" 00:31:50.207 }, 00:31:50.207 { 00:31:50.207 "params": { 00:31:50.207 "trtype": "pcie", 00:31:50.207 "traddr": "0000:00:06.0", 00:31:50.207 "name": "Nvme0" 00:31:50.207 }, 00:31:50.207 "method": "bdev_nvme_attach_controller" 00:31:50.207 }, 00:31:50.207 { 00:31:50.207 "method": "bdev_wait_for_examine" 00:31:50.207 } 00:31:50.207 ] 00:31:50.207 } 00:31:50.207 ] 00:31:50.207 } 00:31:50.207 [2024-10-01 12:51:32.557637] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:50.207 [2024-10-01 12:51:32.558186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136335 ] 00:31:50.207 [2024-10-01 12:51:32.735340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.465 [2024-10-01 12:51:32.999755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.783  Copying: 65/65 [MB] (average 1250 MBps) 00:31:52.783 00:31:52.783 12:51:34 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:31:52.783 12:51:34 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:31:52.783 12:51:34 -- dd/common.sh@31 -- # xtrace_disable 00:31:52.783 12:51:34 -- common/autotest_common.sh@10 -- # set +x 00:31:52.783 { 00:31:52.783 "subsystems": [ 00:31:52.783 { 00:31:52.783 "subsystem": "bdev", 00:31:52.783 "config": [ 00:31:52.783 { 00:31:52.783 "params": { 00:31:52.783 "block_size": 4096, 00:31:52.783 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:31:52.783 "name": "aio1" 00:31:52.783 }, 00:31:52.783 "method": "bdev_aio_create" 00:31:52.783 }, 00:31:52.783 { 00:31:52.783 "params": { 00:31:52.783 "trtype": "pcie", 00:31:52.783 "traddr": "0000:00:06.0", 00:31:52.783 "name": "Nvme0" 00:31:52.783 }, 00:31:52.783 "method": "bdev_nvme_attach_controller" 00:31:52.783 }, 00:31:52.783 { 00:31:52.783 "method": "bdev_wait_for_examine" 00:31:52.783 } 00:31:52.783 ] 00:31:52.783 } 00:31:52.783 ] 00:31:52.783 } 00:31:52.783 [2024-10-01 12:51:35.064259] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:52.783 [2024-10-01 12:51:35.064453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136369 ] 00:31:52.783 [2024-10-01 12:51:35.235064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.042 [2024-10-01 12:51:35.510654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.516  Copying: 1024/1024 [kB] (average 1000 MBps) 00:31:55.516 00:31:55.516 12:51:37 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:31:55.516 12:51:37 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:31:55.516 00:31:55.516 real 0m10.426s 00:31:55.516 user 0m8.365s 00:31:55.516 sys 0m1.619s 00:31:55.516 12:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.516 12:51:37 -- common/autotest_common.sh@10 -- # set +x 00:31:55.516 ************************************ 00:31:55.516 END TEST dd_offset_magic 00:31:55.516 ************************************ 00:31:55.516 12:51:37 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:31:55.516 12:51:37 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:31:55.516 12:51:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:31:55.516 12:51:37 -- dd/common.sh@11 -- # local nvme_ref= 00:31:55.516 12:51:37 -- dd/common.sh@12 -- # local size=4194330 00:31:55.516 12:51:37 -- dd/common.sh@14 -- # local bs=1048576 00:31:55.516 12:51:37 -- dd/common.sh@15 -- # local count=5 00:31:55.516 12:51:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:31:55.516 12:51:37 -- dd/common.sh@18 -- # gen_conf 00:31:55.516 12:51:37 -- dd/common.sh@31 -- # xtrace_disable 00:31:55.516 12:51:37 -- common/autotest_common.sh@10 -- # set +x 00:31:55.516 { 00:31:55.516 "subsystems": [ 00:31:55.516 { 00:31:55.516 "subsystem": "bdev", 00:31:55.516 "config": [ 00:31:55.517 { 00:31:55.517 "params": { 00:31:55.517 "block_size": 4096, 00:31:55.517 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:31:55.517 "name": "aio1" 00:31:55.517 }, 00:31:55.517 "method": "bdev_aio_create" 00:31:55.517 }, 00:31:55.517 { 00:31:55.517 "params": { 00:31:55.517 "trtype": "pcie", 00:31:55.517 "traddr": "0000:00:06.0", 00:31:55.517 "name": "Nvme0" 00:31:55.517 }, 00:31:55.517 "method": "bdev_nvme_attach_controller" 00:31:55.517 }, 00:31:55.517 { 00:31:55.517 "method": "bdev_wait_for_examine" 00:31:55.517 } 00:31:55.517 ] 00:31:55.517 } 00:31:55.517 ] 00:31:55.517 } 00:31:55.517 [2024-10-01 12:51:37.682906] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:55.517 [2024-10-01 12:51:37.683546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136425 ] 00:31:55.517 [2024-10-01 12:51:37.852665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.776 [2024-10-01 12:51:38.133197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.718  Copying: 5120/5120 [kB] (average 833 MBps) 00:31:57.718 00:31:57.718 12:51:40 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:31:57.718 12:51:40 -- dd/common.sh@10 -- # local bdev=aio1 00:31:57.718 12:51:40 -- dd/common.sh@11 -- # local nvme_ref= 00:31:57.718 12:51:40 -- dd/common.sh@12 -- # local size=4194330 00:31:57.718 12:51:40 -- dd/common.sh@14 -- # local bs=1048576 00:31:57.718 12:51:40 -- dd/common.sh@15 -- # local count=5 00:31:57.718 12:51:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:31:57.718 12:51:40 -- dd/common.sh@18 -- # gen_conf 00:31:57.718 12:51:40 -- dd/common.sh@31 -- # xtrace_disable 00:31:57.718 12:51:40 -- common/autotest_common.sh@10 -- # set +x 00:31:57.718 [2024-10-01 12:51:40.221193] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:57.718 [2024-10-01 12:51:40.221380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136459 ] 00:31:57.718 { 00:31:57.718 "subsystems": [ 00:31:57.718 { 00:31:57.718 "subsystem": "bdev", 00:31:57.718 "config": [ 00:31:57.718 { 00:31:57.718 "params": { 00:31:57.718 "block_size": 4096, 00:31:57.718 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:31:57.718 "name": "aio1" 00:31:57.718 }, 00:31:57.718 "method": "bdev_aio_create" 00:31:57.718 }, 00:31:57.718 { 00:31:57.718 "params": { 00:31:57.718 "trtype": "pcie", 00:31:57.718 "traddr": "0000:00:06.0", 00:31:57.718 "name": "Nvme0" 00:31:57.718 }, 00:31:57.718 "method": "bdev_nvme_attach_controller" 00:31:57.718 }, 00:31:57.718 { 00:31:57.718 "method": "bdev_wait_for_examine" 00:31:57.718 } 00:31:57.718 ] 00:31:57.718 } 00:31:57.718 ] 00:31:57.718 } 00:31:57.977 [2024-10-01 12:51:40.397958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.235 [2024-10-01 12:51:40.678426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.701  Copying: 5120/5120 [kB] (average 833 MBps) 00:32:00.701 00:32:00.701 12:51:42 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:32:00.701 00:32:00.701 real 0m24.787s 00:32:00.701 user 0m19.825s 00:32:00.701 sys 0m3.936s 00:32:00.701 12:51:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.701 ************************************ 00:32:00.701 END TEST spdk_dd_bdev_to_bdev 00:32:00.701 ************************************ 00:32:00.701 12:51:42 -- common/autotest_common.sh@10 -- # set +x 00:32:00.701 12:51:43 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:32:00.701 12:51:43 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:32:00.701 12:51:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
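Note: the spdk_dd_sparse suite that starts here first builds a sparse source file: three dd writes of one 4 MiB block each at seek offsets 0, 4 and 8 (bs=4M), so file_zero1 ends up 36 MiB apparent with only 12 MiB allocated — exactly the stat1_s=37748736 / stat1_b=24576 values checked later in this log. A sketch of reproducing and verifying the holes by hand (assumes GNU stat):

    dd if=/dev/zero of=file_zero1 bs=4M count=1         # data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # data at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # data at 32 MiB
    stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1
    # 37748736 bytes apparent, 24576 blocks allocated (24576 * 512 B = 12 MiB)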
00:32:00.701 12:51:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.701 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:32:00.701 ************************************ 00:32:00.701 START TEST spdk_dd_sparse 00:32:00.701 ************************************ 00:32:00.701 12:51:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:32:00.701 * Looking for test storage... 00:32:00.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:32:00.701 12:51:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:00.701 12:51:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.701 12:51:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.701 12:51:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.701 12:51:43 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:00.701 12:51:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:00.701 12:51:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:00.701 12:51:43 -- paths/export.sh@5 -- # export PATH 00:32:00.701 12:51:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:00.701 12:51:43 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:32:00.701 12:51:43 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:32:00.701 12:51:43 -- dd/sparse.sh@110 -- # file1=file_zero1 00:32:00.701 12:51:43 -- dd/sparse.sh@111 -- # file2=file_zero2 00:32:00.701 12:51:43 -- dd/sparse.sh@112 -- # file3=file_zero3 00:32:00.701 12:51:43 -- dd/sparse.sh@113 -- # 
lvstore=dd_lvstore 00:32:00.701 12:51:43 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:32:00.701 12:51:43 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:32:00.701 12:51:43 -- dd/sparse.sh@118 -- # prepare 00:32:00.701 12:51:43 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:32:00.701 12:51:43 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:32:00.701 1+0 records in 00:32:00.701 1+0 records out 00:32:00.701 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00888412 s, 472 MB/s 00:32:00.701 12:51:43 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:32:00.701 1+0 records in 00:32:00.701 1+0 records out 00:32:00.701 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00841881 s, 498 MB/s 00:32:00.701 12:51:43 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:32:00.701 1+0 records in 00:32:00.701 1+0 records out 00:32:00.701 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0150012 s, 280 MB/s 00:32:00.701 12:51:43 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:32:00.701 12:51:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:00.701 12:51:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.701 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:32:00.701 ************************************ 00:32:00.701 START TEST dd_sparse_file_to_file 00:32:00.701 ************************************ 00:32:00.701 12:51:43 -- common/autotest_common.sh@1104 -- # file_to_file 00:32:00.701 12:51:43 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:32:00.701 12:51:43 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:32:00.701 12:51:43 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:32:00.701 12:51:43 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:32:00.701 12:51:43 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:32:00.701 12:51:43 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:32:00.701 12:51:43 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:32:00.702 12:51:43 -- dd/sparse.sh@41 -- # gen_conf 00:32:00.702 12:51:43 -- dd/common.sh@31 -- # xtrace_disable 00:32:00.702 12:51:43 -- common/autotest_common.sh@10 -- # set +x 00:32:00.961 { 00:32:00.961 "subsystems": [ 00:32:00.961 { 00:32:00.961 "subsystem": "bdev", 00:32:00.961 "config": [ 00:32:00.961 { 00:32:00.961 "params": { 00:32:00.961 "block_size": 4096, 00:32:00.961 "filename": "dd_sparse_aio_disk", 00:32:00.961 "name": "dd_aio" 00:32:00.961 }, 00:32:00.961 "method": "bdev_aio_create" 00:32:00.961 }, 00:32:00.961 { 00:32:00.961 "params": { 00:32:00.961 "lvs_name": "dd_lvstore", 00:32:00.961 "bdev_name": "dd_aio" 00:32:00.961 }, 00:32:00.961 "method": "bdev_lvol_create_lvstore" 00:32:00.961 }, 00:32:00.961 { 00:32:00.961 "method": "bdev_wait_for_examine" 00:32:00.961 } 00:32:00.961 ] 00:32:00.961 } 00:32:00.961 ] 00:32:00.961 } 00:32:00.961 [2024-10-01 12:51:43.296511] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:32:00.961 [2024-10-01 12:51:43.296892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136562 ] 00:32:00.961 [2024-10-01 12:51:43.469155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.528 [2024-10-01 12:51:43.766785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.689  Copying: 12/36 [MB] (average 480 MBps) 00:32:03.689 00:32:03.689 12:51:46 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:32:03.689 12:51:46 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:32:03.689 12:51:46 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:32:03.689 12:51:46 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:32:03.689 12:51:46 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:32:03.690 12:51:46 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:32:03.690 12:51:46 -- dd/sparse.sh@52 -- # stat1_b=24576 00:32:03.690 12:51:46 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:32:03.690 12:51:46 -- dd/sparse.sh@53 -- # stat2_b=24576 00:32:03.690 12:51:46 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:32:03.690 00:32:03.690 real 0m2.930s 00:32:03.690 user 0m2.308s 00:32:03.690 sys 0m0.459s 00:32:03.690 12:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.690 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:32:03.690 ************************************ 00:32:03.690 END TEST dd_sparse_file_to_file 00:32:03.690 ************************************ 00:32:03.690 12:51:46 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:32:03.690 12:51:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:03.690 12:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:03.690 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:32:03.690 ************************************ 00:32:03.690 START TEST dd_sparse_file_to_bdev 00:32:03.690 ************************************ 00:32:03.690 12:51:46 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:32:03.690 12:51:46 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:32:03.949 12:51:46 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:32:03.949 12:51:46 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:32:03.949 12:51:46 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:32:03.949 12:51:46 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:32:03.949 12:51:46 -- dd/sparse.sh@73 -- # gen_conf 00:32:03.949 12:51:46 -- dd/common.sh@31 -- # xtrace_disable 00:32:03.949 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:32:03.949 { 00:32:03.949 "subsystems": [ 00:32:03.949 { 00:32:03.949 "subsystem": "bdev", 00:32:03.949 "config": [ 00:32:03.949 { 00:32:03.949 "params": { 00:32:03.949 "block_size": 4096, 00:32:03.949 "filename": "dd_sparse_aio_disk", 00:32:03.949 "name": "dd_aio" 00:32:03.949 }, 00:32:03.949 "method": "bdev_aio_create" 00:32:03.949 }, 00:32:03.949 { 00:32:03.949 "params": { 00:32:03.949 "lvs_name": "dd_lvstore", 00:32:03.949 "lvol_name": "dd_lvol", 00:32:03.949 "size": 37748736, 00:32:03.949 "thin_provision": true 00:32:03.949 }, 
00:32:03.949 "method": "bdev_lvol_create" 00:32:03.949 }, 00:32:03.949 { 00:32:03.949 "method": "bdev_wait_for_examine" 00:32:03.949 } 00:32:03.949 ] 00:32:03.949 } 00:32:03.949 ] 00:32:03.949 } 00:32:03.949 [2024-10-01 12:51:46.300091] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:03.949 [2024-10-01 12:51:46.300287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136633 ] 00:32:03.949 [2024-10-01 12:51:46.478902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.517 [2024-10-01 12:51:46.754714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.777 [2024-10-01 12:51:47.192354] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:32:04.777  Copying: 12/36 [MB] (average 461 MBps)[2024-10-01 12:51:47.267817] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:32:06.684 00:32:06.684 00:32:06.684 ************************************ 00:32:06.684 END TEST dd_sparse_file_to_bdev 00:32:06.684 ************************************ 00:32:06.684 00:32:06.684 real 0m2.825s 00:32:06.684 user 0m2.316s 00:32:06.684 sys 0m0.408s 00:32:06.684 12:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.684 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:32:06.684 12:51:49 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:32:06.684 12:51:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:06.684 12:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:06.684 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:32:06.684 ************************************ 00:32:06.684 START TEST dd_sparse_bdev_to_file 00:32:06.684 ************************************ 00:32:06.684 12:51:49 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:32:06.684 12:51:49 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:32:06.684 12:51:49 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:32:06.684 12:51:49 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:32:06.684 12:51:49 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:32:06.684 12:51:49 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:32:06.684 12:51:49 -- dd/sparse.sh@91 -- # gen_conf 00:32:06.684 12:51:49 -- dd/common.sh@31 -- # xtrace_disable 00:32:06.684 12:51:49 -- common/autotest_common.sh@10 -- # set +x 00:32:06.684 { 00:32:06.684 "subsystems": [ 00:32:06.684 { 00:32:06.684 "subsystem": "bdev", 00:32:06.684 "config": [ 00:32:06.684 { 00:32:06.684 "params": { 00:32:06.684 "block_size": 4096, 00:32:06.684 "filename": "dd_sparse_aio_disk", 00:32:06.684 "name": "dd_aio" 00:32:06.684 }, 00:32:06.684 "method": "bdev_aio_create" 00:32:06.684 }, 00:32:06.684 { 00:32:06.684 "method": "bdev_wait_for_examine" 00:32:06.684 } 00:32:06.684 ] 00:32:06.684 } 00:32:06.684 ] 00:32:06.684 } 00:32:06.684 [2024-10-01 12:51:49.186593] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:32:06.684 [2024-10-01 12:51:49.186868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136697 ] 00:32:06.943 [2024-10-01 12:51:49.365607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.201 [2024-10-01 12:51:49.680216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.690  Copying: 12/36 [MB] (average 666 MBps) 00:32:09.690 00:32:09.690 12:51:51 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:32:09.690 12:51:51 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:32:09.690 12:51:51 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:32:09.690 12:51:51 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:32:09.690 12:51:51 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:32:09.690 12:51:51 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:32:09.690 12:51:51 -- dd/sparse.sh@102 -- # stat2_b=24576 00:32:09.690 12:51:51 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:32:09.690 12:51:51 -- dd/sparse.sh@103 -- # stat3_b=24576 00:32:09.690 12:51:51 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:32:09.690 00:32:09.690 real 0m2.819s 00:32:09.690 user 0m2.237s 00:32:09.690 sys 0m0.469s 00:32:09.690 12:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.690 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:32:09.690 ************************************ 00:32:09.690 END TEST dd_sparse_bdev_to_file 00:32:09.690 ************************************ 00:32:09.690 12:51:51 -- dd/sparse.sh@1 -- # cleanup 00:32:09.690 12:51:51 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:32:09.690 12:51:51 -- dd/sparse.sh@12 -- # rm file_zero1 00:32:09.690 12:51:51 -- dd/sparse.sh@13 -- # rm file_zero2 00:32:09.690 12:51:51 -- dd/sparse.sh@14 -- # rm file_zero3 00:32:09.690 00:32:09.690 real 0m8.974s 00:32:09.690 user 0m7.018s 00:32:09.690 sys 0m1.576s 00:32:09.690 12:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.690 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:09.690 ************************************ 00:32:09.690 END TEST spdk_dd_sparse 00:32:09.690 ************************************ 00:32:09.690 12:51:52 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:32:09.690 12:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:09.690 12:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:09.690 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:09.690 ************************************ 00:32:09.690 START TEST spdk_dd_negative 00:32:09.690 ************************************ 00:32:09.690 12:51:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:32:09.690 * Looking for test storage... 
00:32:09.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:32:09.690 12:51:52 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:09.690 12:51:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.690 12:51:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.690 12:51:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.690 12:51:52 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:09.690 12:51:52 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:09.690 12:51:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:09.690 12:51:52 -- paths/export.sh@5 -- # export PATH 00:32:09.690 12:51:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:09.690 12:51:52 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:09.690 12:51:52 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:09.690 12:51:52 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:09.690 12:51:52 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:09.690 12:51:52 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:32:09.690 12:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:09.690 12:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:09.690 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:09.690 ************************************ 00:32:09.690 
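Note: every case in spdk_dd_negative below drives spdk_dd through the NOT wrapper from autotest_common.sh, which succeeds only when the wrapped command fails. A simplified sketch of the pattern (the real helper also resolves the executable via type -t/-P and filters signal exits with the es > 128 check visible in the log):

    NOT() {
        local es=0
        "$@" || es=$?   # run the command, keep its exit status
        (( es != 0 ))   # invert: pass only if the command failed
    }
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=   # unrecognized option must fail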
START TEST dd_invalid_arguments 00:32:09.690 ************************************ 00:32:09.690 12:51:52 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:32:09.690 12:51:52 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:32:09.690 12:51:52 -- common/autotest_common.sh@640 -- # local es=0 00:32:09.690 12:51:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:32:09.690 12:51:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.690 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.690 12:51:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.690 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.690 12:51:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.967 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.967 12:51:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.967 12:51:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:09.967 12:51:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:32:09.967 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:32:09.967 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:32:09.967 options: 00:32:09.967 -c, --config JSON config file (default none) 00:32:09.967 --json JSON config file (default none) 00:32:09.967 --json-ignore-init-errors 00:32:09.967 don't exit on invalid config entry 00:32:09.967 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:32:09.967 -g, --single-file-segments 00:32:09.967 force creating just one hugetlbfs file 00:32:09.967 -h, --help show this usage 00:32:09.967 -i, --shm-id shared memory ID (optional) 00:32:09.967 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:32:09.967 --lcores lcore to CPU mapping list. The list is in the format: 00:32:09.967 [<,lcores[@CPUs]>...] 00:32:09.967 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:32:09.967 Within the group, '-' is used for range separator, 00:32:09.967 ',' is used for single number separator. 00:32:09.967 '( )' can be omitted for single element group, 00:32:09.967 '@' can be omitted if cpus and lcores have the same value 00:32:09.967 -n, --mem-channels channel number of memory channels used for DPDK 00:32:09.967 -p, --main-core main (primary) core for DPDK 00:32:09.967 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:32:09.967 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:32:09.967 --disable-cpumask-locks Disable CPU core lock files. 
00:32:09.967 --silence-noticelog disable notice level logging to stderr 00:32:09.967 --msg-mempool-size global message memory pool size in count (default: 262143) 00:32:09.967 -u, --no-pci disable PCI access 00:32:09.967 --wait-for-rpc wait for RPCs to initialize subsystems 00:32:09.967 --max-delay maximum reactor delay (in microseconds) 00:32:09.967 -B, --pci-blocked pci addr to block (can be used more than once) 00:32:09.967 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:32:09.967 -R, --huge-unlink unlink huge files after initialization 00:32:09.967 -v, --version print SPDK version 00:32:09.967 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:32:09.967 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:32:09.967 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:32:09.967 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:32:09.967 Tracepoints vary in size and can use more than one trace entry. 00:32:09.967 --rpcs-allowed comma-separated list of permitted RPCS 00:32:09.967 --env-context Opaque context for use of the env implementation 00:32:09.967 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:32:09.967 --no-huge run without using hugepages 00:32:09.967 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:32:09.967 -e, --tpoint-group [:] 00:32:09.967 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:32:09.967 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:32:09.967 Groups and masks can be combined (e.g. thread,bdev:0x1). [2024-10-01 12:51:52.279567] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:32:09.967 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:32:09.967 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:32:09.967 [--------- DD Options ---------] 00:32:09.967 --if Input file. Must specify either --if or --ib. 00:32:09.967 --ib Input bdev. Must specify either --if or --ib 00:32:09.967 --of Output file. Must specify either --of or --ob. 00:32:09.967 --ob Output bdev. Must specify either --of or --ob. 00:32:09.967 --iflag Input file flags. 00:32:09.967 --oflag Output file flags. 00:32:09.967 --bs I/O unit size (default: 4096) 00:32:09.967 --qd Queue depth (default: 2) 00:32:09.967 --count I/O unit count. The number of I/O units to copy. (default: all) 00:32:09.967 --skip Skip this many I/O units at start of input.
(default: 0) 00:32:09.967 --seek Skip this many I/O units at start of output. (default: 0) 00:32:09.967 --aio Force usage of AIO. (by default io_uring is used if available) 00:32:09.967 --sparse Enable hole skipping in input target 00:32:09.967 Available iflag and oflag values: 00:32:09.967 append - append mode 00:32:09.967 direct - use direct I/O for data 00:32:09.967 directory - fail unless a directory 00:32:09.967 dsync - use synchronized I/O for data 00:32:09.967 noatime - do not update access time 00:32:09.967 noctty - do not assign controlling terminal from file 00:32:09.967 nofollow - do not follow symlinks 00:32:09.967 nonblock - use non-blocking I/O 00:32:09.967 sync - use synchronized I/O for data and metadata 00:32:09.967 12:51:52 -- common/autotest_common.sh@643 -- # es=2 00:32:09.967 12:51:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:09.967 12:51:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:09.967 12:51:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:09.967 00:32:09.967 real 0m0.125s 00:32:09.967 user 0m0.076s 00:32:09.967 sys 0m0.049s 00:32:09.967 12:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.967 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:09.967 ************************************ 00:32:09.967 END TEST dd_invalid_arguments 00:32:09.967 ************************************ 00:32:09.967 12:51:52 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:32:09.967 12:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:09.967 12:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:09.967 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:09.967 ************************************ 00:32:09.967 START TEST dd_double_input 00:32:09.968 ************************************ 00:32:09.968 12:51:52 -- common/autotest_common.sh@1104 -- # double_input 00:32:09.968 12:51:52 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:32:09.968 12:51:52 -- common/autotest_common.sh@640 -- # local es=0 00:32:09.968 12:51:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:32:09.968 12:51:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.968 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.968 12:51:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.968 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.968 12:51:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.968 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.968 12:51:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:09.968 12:51:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:09.968 12:51:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:32:09.968 [2024-10-01 12:51:52.452740] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:32:10.227 12:51:52 -- common/autotest_common.sh@643 -- # es=22 00:32:10.227 12:51:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:10.227 12:51:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:10.227 12:51:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:10.227 00:32:10.227 real 0m0.121s 00:32:10.227 user 0m0.057s 00:32:10.227 sys 0m0.065s 00:32:10.227 12:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:10.227 ************************************ 00:32:10.227 END TEST dd_double_input 00:32:10.227 ************************************ 00:32:10.227 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:10.227 12:51:52 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:32:10.227 12:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:10.227 12:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:10.227 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:10.227 ************************************ 00:32:10.227 START TEST dd_double_output 00:32:10.227 ************************************ 00:32:10.227 12:51:52 -- common/autotest_common.sh@1104 -- # double_output 00:32:10.227 12:51:52 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:32:10.227 12:51:52 -- common/autotest_common.sh@640 -- # local es=0 00:32:10.227 12:51:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:32:10.227 12:51:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.227 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.227 12:51:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.227 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.227 12:51:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.227 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.227 12:51:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.227 12:51:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:10.227 12:51:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:32:10.227 [2024-10-01 12:51:52.655147] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
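Note: the es=22 captured below is spdk_dd's exit status for conflicting arguments; 22 is EINVAL on Linux, and the (( es > 128 )) guard in the wrapper separates such ordinary errors from death-by-signal exits. A sketch of checking it directly (the expected value is read off this log and is not guaranteed across SPDK versions):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=dd.dump0 --ib=malloc0 --ob=malloc1
    echo $?   # 22 (EINVAL): you may specify either --if or --ib, but not both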
00:32:10.227 12:51:52 -- common/autotest_common.sh@643 -- # es=22 00:32:10.227 12:51:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:10.227 ************************************ 00:32:10.227 END TEST dd_double_output 00:32:10.227 12:51:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:10.227 12:51:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:10.227 00:32:10.227 real 0m0.151s 00:32:10.227 user 0m0.079s 00:32:10.227 sys 0m0.072s 00:32:10.227 12:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:10.227 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:10.227 ************************************ 00:32:10.486 12:51:52 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:32:10.486 12:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:10.486 12:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:10.486 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:10.486 ************************************ 00:32:10.486 START TEST dd_no_input 00:32:10.486 ************************************ 00:32:10.486 12:51:52 -- common/autotest_common.sh@1104 -- # no_input 00:32:10.486 12:51:52 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:32:10.486 12:51:52 -- common/autotest_common.sh@640 -- # local es=0 00:32:10.486 12:51:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:32:10.486 12:51:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.486 12:51:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.486 12:51:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.486 12:51:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:10.486 12:51:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:32:10.486 [2024-10-01 12:51:52.877427] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:32:10.486 12:51:52 -- common/autotest_common.sh@643 -- # es=22 00:32:10.486 12:51:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:10.486 12:51:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:10.486 12:51:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:10.486 00:32:10.486 real 0m0.159s 00:32:10.486 user 0m0.068s 00:32:10.486 sys 0m0.091s 00:32:10.486 12:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:10.486 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:10.486 ************************************ 00:32:10.486 END TEST dd_no_input 00:32:10.486 ************************************ 00:32:10.486 12:51:52 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:32:10.486 12:51:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:10.486 12:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:10.486 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:32:10.486 ************************************ 
00:32:10.486 START TEST dd_no_output 00:32:10.486 ************************************ 00:32:10.486 12:51:52 -- common/autotest_common.sh@1104 -- # no_output 00:32:10.486 12:51:52 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:10.486 12:51:52 -- common/autotest_common.sh@640 -- # local es=0 00:32:10.486 12:51:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:10.486 12:51:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.486 12:51:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.486 12:51:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.486 12:51:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.486 12:51:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:10.486 12:51:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:10.745 [2024-10-01 12:51:53.073124] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:32:10.745 12:51:53 -- common/autotest_common.sh@643 -- # es=22 00:32:10.745 12:51:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:10.745 12:51:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:10.745 12:51:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:10.745 00:32:10.745 real 0m0.127s 00:32:10.745 user 0m0.074s 00:32:10.745 sys 0m0.054s 00:32:10.745 12:51:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:10.745 12:51:53 -- common/autotest_common.sh@10 -- # set +x 00:32:10.745 ************************************ 00:32:10.745 END TEST dd_no_output 00:32:10.745 ************************************ 00:32:10.745 12:51:53 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:32:10.745 12:51:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:10.745 12:51:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:10.745 12:51:53 -- common/autotest_common.sh@10 -- # set +x 00:32:10.745 ************************************ 00:32:10.745 START TEST dd_wrong_blocksize 00:32:10.745 ************************************ 00:32:10.745 12:51:53 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:32:10.745 12:51:53 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:32:10.745 12:51:53 -- common/autotest_common.sh@640 -- # local es=0 00:32:10.745 12:51:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:32:10.745 12:51:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.745 12:51:53 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:32:10.745 12:51:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.745 12:51:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.745 12:51:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.745 12:51:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:10.745 12:51:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.745 12:51:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:10.745 12:51:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:32:10.745 [2024-10-01 12:51:53.275648] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:32:11.004 12:51:53 -- common/autotest_common.sh@643 -- # es=22 00:32:11.004 12:51:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:11.004 12:51:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:11.004 12:51:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:11.004 00:32:11.004 real 0m0.137s 00:32:11.004 user 0m0.055s 00:32:11.004 sys 0m0.083s 00:32:11.004 12:51:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.004 12:51:53 -- common/autotest_common.sh@10 -- # set +x 00:32:11.004 ************************************ 00:32:11.004 END TEST dd_wrong_blocksize 00:32:11.004 ************************************ 00:32:11.004 12:51:53 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:32:11.004 12:51:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:11.004 12:51:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.004 12:51:53 -- common/autotest_common.sh@10 -- # set +x 00:32:11.004 ************************************ 00:32:11.004 START TEST dd_smaller_blocksize 00:32:11.004 ************************************ 00:32:11.004 12:51:53 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:32:11.004 12:51:53 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:32:11.004 12:51:53 -- common/autotest_common.sh@640 -- # local es=0 00:32:11.004 12:51:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:32:11.004 12:51:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.004 12:51:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:11.004 12:51:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.004 12:51:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:11.004 12:51:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.004 12:51:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:11.004 12:51:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.004 12:51:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:32:11.004 12:51:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:32:11.004 [2024-10-01 12:51:53.469543] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:11.004 [2024-10-01 12:51:53.470287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136975 ] 00:32:11.262 [2024-10-01 12:51:53.633115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.520 [2024-10-01 12:51:53.919646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.454 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:32:12.454 [2024-10-01 12:51:54.798083] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:32:12.454 [2024-10-01 12:51:54.798226] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:13.389 [2024-10-01 12:51:55.783521] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:32:13.956 12:51:56 -- common/autotest_common.sh@643 -- # es=244 00:32:13.956 12:51:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:13.956 12:51:56 -- common/autotest_common.sh@652 -- # es=116 00:32:13.956 12:51:56 -- common/autotest_common.sh@653 -- # case "$es" in 00:32:13.956 12:51:56 -- common/autotest_common.sh@660 -- # es=1 00:32:13.956 12:51:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:13.956 00:32:13.956 real 0m2.920s 00:32:13.956 user 0m2.115s 00:32:13.956 sys 0m0.704s 00:32:13.956 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.956 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:13.956 ************************************ 00:32:13.956 END TEST dd_smaller_blocksize 00:32:13.956 ************************************ 00:32:13.956 12:51:56 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:32:13.956 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:13.956 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:13.956 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:13.956 ************************************ 00:32:13.956 START TEST dd_invalid_count 00:32:13.956 ************************************ 00:32:13.956 12:51:56 -- common/autotest_common.sh@1104 -- # invalid_count 00:32:13.956 12:51:56 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:32:13.956 12:51:56 -- common/autotest_common.sh@640 -- # local es=0 00:32:13.956 12:51:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:32:13.956 12:51:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:13.956 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:13.956 12:51:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:13.956 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:13.956 12:51:56 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:13.956 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:13.956 12:51:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:13.956 12:51:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:13.956 12:51:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:32:13.956 [2024-10-01 12:51:56.476249] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:32:14.215 12:51:56 -- common/autotest_common.sh@643 -- # es=22 00:32:14.215 12:51:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:14.215 12:51:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:14.215 ************************************ 00:32:14.215 END TEST dd_invalid_count 00:32:14.215 ************************************ 00:32:14.215 12:51:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:14.215 00:32:14.215 real 0m0.142s 00:32:14.215 user 0m0.075s 00:32:14.215 sys 0m0.066s 00:32:14.215 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:14.215 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:14.215 12:51:56 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:32:14.215 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:14.215 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:14.215 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:14.215 ************************************ 00:32:14.215 START TEST dd_invalid_oflag 00:32:14.215 ************************************ 00:32:14.215 12:51:56 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:32:14.215 12:51:56 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:32:14.215 12:51:56 -- common/autotest_common.sh@640 -- # local es=0 00:32:14.215 12:51:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:32:14.215 12:51:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.215 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.215 12:51:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.215 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.215 12:51:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.215 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.215 12:51:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.215 12:51:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:14.215 12:51:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:32:14.215 [2024-10-01 12:51:56.683221] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:32:14.215 12:51:56 -- common/autotest_common.sh@643 -- # es=22 00:32:14.215 12:51:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:14.215 12:51:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:14.215 
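spdk_dd also validates flag pairing: --oflag is accepted only together with --of, and --iflag (exercised in the next test) only with --if. Both expected-failure shapes, exactly as traced in this suite:

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$dd_bin" --ib= --ob= --oflag=0   # fails: --oflags may be used only with --of
"$dd_bin" --ib= --ob= --iflag=0   # fails: --iflags may be used only with --if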
12:51:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:14.215 00:32:14.215 real 0m0.124s 00:32:14.215 user 0m0.050s 00:32:14.215 sys 0m0.074s 00:32:14.215 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:14.215 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:14.215 ************************************ 00:32:14.215 END TEST dd_invalid_oflag 00:32:14.215 ************************************ 00:32:14.473 12:51:56 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:32:14.473 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:14.473 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:14.473 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:14.473 ************************************ 00:32:14.473 START TEST dd_invalid_iflag 00:32:14.473 ************************************ 00:32:14.473 12:51:56 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:32:14.473 12:51:56 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:32:14.473 12:51:56 -- common/autotest_common.sh@640 -- # local es=0 00:32:14.473 12:51:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:32:14.473 12:51:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.473 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.473 12:51:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.473 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.473 12:51:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.473 12:51:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.473 12:51:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.473 12:51:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:14.473 12:51:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:32:14.473 [2024-10-01 12:51:56.887467] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:32:14.473 12:51:56 -- common/autotest_common.sh@643 -- # es=22 00:32:14.473 12:51:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:14.473 12:51:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:14.473 12:51:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:14.473 00:32:14.473 real 0m0.138s 00:32:14.473 user 0m0.067s 00:32:14.473 sys 0m0.072s 00:32:14.473 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:14.473 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:14.473 ************************************ 00:32:14.473 END TEST dd_invalid_iflag 00:32:14.473 ************************************ 00:32:14.473 12:51:56 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:32:14.473 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:14.473 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:14.473 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:14.732 ************************************ 00:32:14.732 START TEST dd_unknown_flag 00:32:14.732 ************************************ 00:32:14.732 12:51:57 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:32:14.732 12:51:57 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:32:14.732 12:51:57 -- common/autotest_common.sh@640 -- # local es=0 00:32:14.732 12:51:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:32:14.732 12:51:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.732 12:51:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.732 12:51:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.732 12:51:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.732 12:51:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.732 12:51:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:14.732 12:51:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.732 12:51:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:14.732 12:51:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:32:14.732 [2024-10-01 12:51:57.098551] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:14.732 [2024-10-01 12:51:57.098755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137119 ] 00:32:14.732 [2024-10-01 12:51:57.263599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.991 [2024-10-01 12:51:57.518475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.556 [2024-10-01 12:51:57.905034] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:32:15.556 [2024-10-01 12:51:57.905157] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:32:15.556 [2024-10-01 12:51:57.905184] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:32:15.556 [2024-10-01 12:51:57.905260] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:16.490 [2024-10-01 12:51:58.851785] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:32:17.054 12:51:59 -- common/autotest_common.sh@643 -- # es=236 00:32:17.054 12:51:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:17.054 12:51:59 -- common/autotest_common.sh@652 -- # es=108 00:32:17.054 12:51:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:32:17.054 12:51:59 -- common/autotest_common.sh@660 -- # es=1 00:32:17.054 12:51:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:17.054 00:32:17.054 real 0m2.353s 00:32:17.054 user 0m1.922s 00:32:17.054 sys 0m0.330s 00:32:17.054 12:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.054 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:32:17.054 ************************************ 00:32:17.054 END 
TEST dd_unknown_flag 00:32:17.054 ************************************ 00:32:17.054 12:51:59 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:32:17.054 12:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:17.054 12:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:17.054 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:32:17.054 ************************************ 00:32:17.054 START TEST dd_invalid_json 00:32:17.054 ************************************ 00:32:17.054 12:51:59 -- common/autotest_common.sh@1104 -- # invalid_json 00:32:17.054 12:51:59 -- dd/negative_dd.sh@95 -- # : 00:32:17.054 12:51:59 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:32:17.054 12:51:59 -- common/autotest_common.sh@640 -- # local es=0 00:32:17.054 12:51:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:32:17.054 12:51:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.054 12:51:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:17.054 12:51:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.054 12:51:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:17.054 12:51:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.054 12:51:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:17.054 12:51:59 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.054 12:51:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:17.054 12:51:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:32:17.054 [2024-10-01 12:51:59.504415] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
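dd_invalid_json hands spdk_dd a configuration over an anonymous descriptor (--json /dev/fd/62); the bare ':' traced at the start of the test suggests a process substitution producing empty output, which the JSON loader then rejects with rc -2 before any copy runs. A sketch under that assumption:

# <(:) expands to a /dev/fd/NN path whose contents are empty, i.e. not valid JSON
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json <(:)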
00:32:17.054 [2024-10-01 12:51:59.504556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137172 ] 00:32:17.312 [2024-10-01 12:51:59.673238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.569 [2024-10-01 12:51:59.932324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.569 [2024-10-01 12:51:59.932580] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:32:17.569 [2024-10-01 12:51:59.932622] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:17.569 [2024-10-01 12:51:59.932703] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:32:18.135 12:52:00 -- common/autotest_common.sh@643 -- # es=234 00:32:18.135 12:52:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:18.135 12:52:00 -- common/autotest_common.sh@652 -- # es=106 00:32:18.135 12:52:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:32:18.135 12:52:00 -- common/autotest_common.sh@660 -- # es=1 00:32:18.135 12:52:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:18.135 00:32:18.135 real 0m1.020s 00:32:18.135 user 0m0.769s 00:32:18.135 sys 0m0.154s 00:32:18.135 12:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.135 ************************************ 00:32:18.135 END TEST dd_invalid_json 00:32:18.135 ************************************ 00:32:18.135 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:32:18.135 00:32:18.135 real 0m8.428s 00:32:18.135 user 0m5.844s 00:32:18.135 sys 0m2.286s 00:32:18.135 12:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.135 ************************************ 00:32:18.135 END TEST spdk_dd_negative 00:32:18.135 ************************************ 00:32:18.135 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:32:18.135 00:32:18.135 real 3m14.024s 00:32:18.135 user 2m34.029s 00:32:18.135 sys 0m30.655s 00:32:18.135 12:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.135 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:32:18.135 ************************************ 00:32:18.135 END TEST spdk_dd 00:32:18.135 ************************************ 00:32:18.135 12:52:00 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:32:18.135 12:52:00 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:32:18.135 12:52:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:18.135 12:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:18.135 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:32:18.135 ************************************ 00:32:18.135 START TEST blockdev_nvme 00:32:18.135 ************************************ 00:32:18.135 12:52:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:32:18.394 * Looking for test storage... 
00:32:18.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:32:18.394 12:52:00 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:32:18.394 12:52:00 -- bdev/nbd_common.sh@6 -- # set -e 00:32:18.394 12:52:00 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:32:18.394 12:52:00 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:18.394 12:52:00 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:32:18.394 12:52:00 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:32:18.394 12:52:00 -- bdev/blockdev.sh@18 -- # : 00:32:18.394 12:52:00 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:32:18.394 12:52:00 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:32:18.394 12:52:00 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:32:18.394 12:52:00 -- bdev/blockdev.sh@672 -- # uname -s 00:32:18.394 12:52:00 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:32:18.394 12:52:00 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:32:18.394 12:52:00 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:32:18.394 12:52:00 -- bdev/blockdev.sh@681 -- # crypto_device= 00:32:18.394 12:52:00 -- bdev/blockdev.sh@682 -- # dek= 00:32:18.394 12:52:00 -- bdev/blockdev.sh@683 -- # env_ctx= 00:32:18.394 12:52:00 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:32:18.394 12:52:00 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:32:18.394 12:52:00 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:32:18.394 12:52:00 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:32:18.394 12:52:00 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:32:18.394 12:52:00 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=137266 00:32:18.394 12:52:00 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:18.394 12:52:00 -- bdev/blockdev.sh@47 -- # waitforlisten 137266 00:32:18.394 12:52:00 -- common/autotest_common.sh@819 -- # '[' -z 137266 ']' 00:32:18.394 12:52:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.394 12:52:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:18.394 12:52:00 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:32:18.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.394 12:52:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.394 12:52:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:18.394 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:32:18.394 [2024-10-01 12:52:00.814021] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
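blockdev.sh manages the target daemon with a pid-scoped lifecycle: spdk_tgt starts in the background, waitforlisten polls until the RPC socket answers, and a trap ties killprocess to the test's exit so the daemon cannot outlive it. The skeleton of that pattern, simplified from the trace (waitforlisten and killprocess are the harness's own helpers):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
spdk_tgt_pid=$!
trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock accepts connections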
00:32:18.394 [2024-10-01 12:52:00.814175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137266 ] 00:32:18.655 [2024-10-01 12:52:00.973857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.914 [2024-10-01 12:52:01.259412] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:18.914 [2024-10-01 12:52:01.259696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.293 12:52:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:20.293 12:52:02 -- common/autotest_common.sh@852 -- # return 0 00:32:20.293 12:52:02 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:32:20.293 12:52:02 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:32:20.293 12:52:02 -- bdev/blockdev.sh@79 -- # local json 00:32:20.293 12:52:02 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:32:20.293 12:52:02 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:20.293 12:52:02 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:32:20.293 12:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.293 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 12:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.293 12:52:02 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:32:20.293 12:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.293 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 12:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.293 12:52:02 -- bdev/blockdev.sh@738 -- # cat 00:32:20.293 12:52:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:32:20.293 12:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.293 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 12:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.293 12:52:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:32:20.293 12:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.293 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 12:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.293 12:52:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:20.293 12:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.293 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 12:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.293 12:52:02 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:32:20.293 12:52:02 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:32:20.293 12:52:02 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:32:20.293 12:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.293 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 12:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.293 12:52:02 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:32:20.293 12:52:02 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b61844aa-ac52-4690-bca9-c7ebef33df02"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b61844aa-ac52-4690-bca9-c7ebef33df02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:32:20.293 12:52:02 -- bdev/blockdev.sh@747 -- # jq -r .name 00:32:20.293 12:52:02 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:32:20.293 12:52:02 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:32:20.293 12:52:02 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:32:20.293 12:52:02 -- bdev/blockdev.sh@752 -- # killprocess 137266 00:32:20.293 12:52:02 -- common/autotest_common.sh@926 -- # '[' -z 137266 ']' 00:32:20.293 12:52:02 -- common/autotest_common.sh@930 -- # kill -0 137266 00:32:20.293 12:52:02 -- common/autotest_common.sh@931 -- # uname 00:32:20.293 12:52:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:20.293 12:52:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137266 00:32:20.293 12:52:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:20.293 killing process with pid 137266 00:32:20.293 12:52:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:20.293 12:52:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137266' 00:32:20.293 12:52:02 -- common/autotest_common.sh@945 -- # kill 137266 00:32:20.293 12:52:02 -- common/autotest_common.sh@950 -- # wait 137266 00:32:23.582 12:52:05 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:23.582 12:52:05 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:32:23.582 12:52:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:23.582 12:52:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:23.582 12:52:05 -- common/autotest_common.sh@10 -- # set +x 00:32:23.582 ************************************ 00:32:23.582 START TEST bdev_hello_world 00:32:23.582 ************************************ 00:32:23.582 12:52:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:32:23.582 [2024-10-01 12:52:05.647465] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:32:23.582 [2024-10-01 12:52:05.648102] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137373 ] 00:32:23.582 [2024-10-01 12:52:05.817374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.582 [2024-10-01 12:52:06.082539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.150 [2024-10-01 12:52:06.632081] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:24.150 [2024-10-01 12:52:06.632191] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:32:24.150 [2024-10-01 12:52:06.632245] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:24.150 [2024-10-01 12:52:06.635823] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:24.150 [2024-10-01 12:52:06.636494] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:24.150 [2024-10-01 12:52:06.636552] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:24.150 [2024-10-01 12:52:06.636765] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:32:24.150 00:32:24.151 [2024-10-01 12:52:06.636825] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:25.526 00:32:25.526 real 0m2.472s 00:32:25.526 user 0m2.040s 00:32:25.526 sys 0m0.333s 00:32:25.526 12:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.526 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:32:25.526 ************************************ 00:32:25.526 END TEST bdev_hello_world 00:32:25.526 ************************************ 00:32:25.784 12:52:08 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:32:25.784 12:52:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:25.784 12:52:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:25.784 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:32:25.784 ************************************ 00:32:25.784 START TEST bdev_bounds 00:32:25.784 ************************************ 00:32:25.784 12:52:08 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:32:25.784 12:52:08 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:25.784 12:52:08 -- bdev/blockdev.sh@288 -- # bdevio_pid=137430 00:32:25.784 12:52:08 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:25.784 Process bdevio pid: 137430 00:32:25.784 12:52:08 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 137430' 00:32:25.784 12:52:08 -- bdev/blockdev.sh@291 -- # waitforlisten 137430 00:32:25.784 12:52:08 -- common/autotest_common.sh@819 -- # '[' -z 137430 ']' 00:32:25.784 12:52:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.784 12:52:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:25.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.784 12:52:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
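bdev_bounds then drives the bdevio example binary in two halves: bdevio starts with -w (which appears to hold the app until triggered over RPC) and -s 0 (no extra reserved memory, per the PRE_RESERVED_MEM=0 set earlier), and once it is listening, tests.py fires the suite. The pair as invoked by the harness:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
  -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests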
00:32:25.784 12:52:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:25.784 12:52:08 -- common/autotest_common.sh@10 -- # set +x 00:32:25.784 [2024-10-01 12:52:08.194156] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:25.784 [2024-10-01 12:52:08.194339] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137430 ] 00:32:26.043 [2024-10-01 12:52:08.365235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:26.302 [2024-10-01 12:52:08.650635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.302 [2024-10-01 12:52:08.650769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:26.302 [2024-10-01 12:52:08.650769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.681 12:52:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:27.681 12:52:09 -- common/autotest_common.sh@852 -- # return 0 00:32:27.681 12:52:09 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:27.681 I/O targets: 00:32:27.681 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:32:27.681 00:32:27.681 00:32:27.681 CUnit - A unit testing framework for C - Version 2.1-3 00:32:27.681 http://cunit.sourceforge.net/ 00:32:27.681 00:32:27.681 00:32:27.681 Suite: bdevio tests on: Nvme0n1 00:32:27.681 Test: blockdev write read block ...passed 00:32:27.681 Test: blockdev write zeroes read block ...passed 00:32:27.681 Test: blockdev write zeroes read no split ...passed 00:32:27.681 Test: blockdev write zeroes read split ...passed 00:32:27.681 Test: blockdev write zeroes read split partial ...passed 00:32:27.681 Test: blockdev reset ...[2024-10-01 12:52:09.984373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:32:27.681 [2024-10-01 12:52:09.988581] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:27.681 passed 00:32:27.681 Test: blockdev write read 8 blocks ...passed 00:32:27.681 Test: blockdev write read size > 128k ...passed 00:32:27.681 Test: blockdev write read invalid size ...passed 00:32:27.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:27.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:27.681 Test: blockdev write read max offset ...passed 00:32:27.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:27.681 Test: blockdev writev readv 8 blocks ...passed 00:32:27.681 Test: blockdev writev readv 30 x 1block ...passed 00:32:27.681 Test: blockdev writev readv block ...passed 00:32:27.681 Test: blockdev writev readv size > 128k ...passed 00:32:27.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:27.681 Test: blockdev comparev and writev ...[2024-10-01 12:52:09.997987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x3620d000 len:0x1000 00:32:27.681 [2024-10-01 12:52:09.998166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:32:27.681 passed 00:32:27.681 Test: blockdev nvme passthru rw ...passed 00:32:27.681 Test: blockdev nvme passthru vendor specific ...[2024-10-01 12:52:09.999061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:32:27.681 [2024-10-01 12:52:09.999195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:32:27.681 passed 00:32:27.681 Test: blockdev nvme admin passthru ...passed 00:32:27.681 Test: blockdev copy ...passed 00:32:27.681 00:32:27.681 Run Summary: Type Total Ran Passed Failed Inactive 00:32:27.681 suites 1 1 n/a 0 0 00:32:27.681 tests 23 23 23 0 0 00:32:27.681 asserts 152 152 152 0 n/a 00:32:27.681 00:32:27.681 Elapsed time = 0.251 seconds 00:32:27.681 0 00:32:27.681 12:52:10 -- bdev/blockdev.sh@293 -- # killprocess 137430 00:32:27.681 12:52:10 -- common/autotest_common.sh@926 -- # '[' -z 137430 ']' 00:32:27.681 12:52:10 -- common/autotest_common.sh@930 -- # kill -0 137430 00:32:27.681 12:52:10 -- common/autotest_common.sh@931 -- # uname 00:32:27.681 12:52:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:27.681 12:52:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137430 00:32:27.681 12:52:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:27.682 12:52:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:27.682 12:52:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137430' 00:32:27.682 killing process with pid 137430 00:32:27.682 12:52:10 -- common/autotest_common.sh@945 -- # kill 137430 00:32:27.682 12:52:10 -- common/autotest_common.sh@950 -- # wait 137430 00:32:29.102 12:52:11 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:32:29.102 00:32:29.102 real 0m3.437s 00:32:29.102 user 0m8.310s 00:32:29.102 sys 0m0.523s 00:32:29.102 12:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.102 ************************************ 00:32:29.102 12:52:11 -- common/autotest_common.sh@10 -- # set +x 00:32:29.102 END TEST bdev_bounds 00:32:29.102 ************************************ 00:32:29.102 12:52:11 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
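Each START/END banner and real/user/sys block in this log comes from the run_test wrapper just traced above, which names a suite, times its body, and prints the banners. A plausible minimal form (the real helper also records results for the end-of-run summary):

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                  # produces the real/user/sys blocks in this log
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}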
00:32:29.102 12:52:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:32:29.102 12:52:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:29.102 12:52:11 -- common/autotest_common.sh@10 -- # set +x 00:32:29.359 ************************************ 00:32:29.359 START TEST bdev_nbd 00:32:29.359 ************************************ 00:32:29.359 12:52:11 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:32:29.359 12:52:11 -- bdev/blockdev.sh@298 -- # uname -s 00:32:29.359 12:52:11 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:32:29.359 12:52:11 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:29.359 12:52:11 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:29.359 12:52:11 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:32:29.359 12:52:11 -- bdev/blockdev.sh@302 -- # local bdev_all 00:32:29.359 12:52:11 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:32:29.359 12:52:11 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:32:29.359 12:52:11 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:29.359 12:52:11 -- bdev/blockdev.sh@309 -- # local nbd_all 00:32:29.359 12:52:11 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:32:29.359 12:52:11 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:32:29.359 12:52:11 -- bdev/blockdev.sh@312 -- # local nbd_list 00:32:29.359 12:52:11 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:32:29.359 12:52:11 -- bdev/blockdev.sh@313 -- # local bdev_list 00:32:29.359 12:52:11 -- bdev/blockdev.sh@316 -- # nbd_pid=137506 00:32:29.359 12:52:11 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:29.359 12:52:11 -- bdev/blockdev.sh@318 -- # waitforlisten 137506 /var/tmp/spdk-nbd.sock 00:32:29.359 12:52:11 -- common/autotest_common.sh@819 -- # '[' -z 137506 ']' 00:32:29.359 12:52:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:29.359 12:52:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:29.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:29.359 12:52:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:29.359 12:52:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:29.359 12:52:11 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:29.359 12:52:11 -- common/autotest_common.sh@10 -- # set +x 00:32:29.359 [2024-10-01 12:52:11.715958] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
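bdev_nbd exports Nvme0n1 as a kernel block device: bdev_svc listens on /var/tmp/spdk-nbd.sock, nbd_start_disk attaches the bdev to /dev/nbd0, and the verify pass writes 1 MiB of urandom through the device and byte-compares it back. The core sequence, condensed from the RPC and dd calls traced below (the harness interleaves nbd_get_disks and start/stop checks around these):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0   # verify
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0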
00:32:29.359 [2024-10-01 12:52:11.716529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.359 [2024-10-01 12:52:11.888382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.617 [2024-10-01 12:52:12.114858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.183 12:52:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:30.183 12:52:12 -- common/autotest_common.sh@852 -- # return 0 00:32:30.183 12:52:12 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@24 -- # local i 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:30.183 12:52:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:32:30.442 12:52:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:30.442 12:52:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:30.442 12:52:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:30.442 12:52:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:32:30.442 12:52:12 -- common/autotest_common.sh@857 -- # local i 00:32:30.442 12:52:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:32:30.442 12:52:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:32:30.442 12:52:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:32:30.442 12:52:12 -- common/autotest_common.sh@861 -- # break 00:32:30.442 12:52:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:32:30.442 12:52:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:32:30.442 12:52:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:30.442 1+0 records in 00:32:30.442 1+0 records out 00:32:30.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657311 s, 6.2 MB/s 00:32:30.442 12:52:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:30.442 12:52:12 -- common/autotest_common.sh@874 -- # size=4096 00:32:30.442 12:52:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:30.442 12:52:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:32:30.442 12:52:12 -- common/autotest_common.sh@877 -- # return 0 00:32:30.442 12:52:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:30.442 12:52:12 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:30.442 12:52:12 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:30.701 12:52:13 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:30.701 { 00:32:30.701 "nbd_device": "/dev/nbd0", 00:32:30.701 "bdev_name": "Nvme0n1" 00:32:30.701 } 00:32:30.701 ]' 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:30.701 { 00:32:30.701 "nbd_device": "/dev/nbd0", 00:32:30.701 "bdev_name": "Nvme0n1" 00:32:30.701 } 00:32:30.701 ]' 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@51 -- # local i 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:30.701 12:52:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@41 -- # break 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@45 -- # return 0 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:30.960 12:52:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@65 -- # true 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@65 -- # count=0 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@122 -- # count=0 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@127 -- # return 0 00:32:31.219 12:52:13 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@12 -- # local i 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:31.219 12:52:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:32:31.478 /dev/nbd0 00:32:31.478 12:52:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:31.478 12:52:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:31.478 12:52:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:32:31.478 12:52:13 -- common/autotest_common.sh@857 -- # local i 00:32:31.478 12:52:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:32:31.478 12:52:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:32:31.478 12:52:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:32:31.478 12:52:13 -- common/autotest_common.sh@861 -- # break 00:32:31.478 12:52:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:32:31.478 12:52:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:32:31.478 12:52:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:31.478 1+0 records in 00:32:31.478 1+0 records out 00:32:31.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046301 s, 8.8 MB/s 00:32:31.478 12:52:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:31.478 12:52:13 -- common/autotest_common.sh@874 -- # size=4096 00:32:31.478 12:52:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:31.478 12:52:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:32:31.478 12:52:13 -- common/autotest_common.sh@877 -- # return 0 00:32:31.478 12:52:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:31.478 12:52:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:31.478 12:52:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:31.478 12:52:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:31.478 12:52:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:31.737 { 00:32:31.737 "nbd_device": "/dev/nbd0", 00:32:31.737 "bdev_name": "Nvme0n1" 00:32:31.737 } 00:32:31.737 ]' 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:31.737 { 00:32:31.737 "nbd_device": "/dev/nbd0", 00:32:31.737 "bdev_name": "Nvme0n1" 00:32:31.737 } 00:32:31.737 ]' 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@65 -- # count=1 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@66 -- # echo 1 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@95 -- # count=1 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:32:31.737 12:52:14 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:31.737 256+0 records in 00:32:31.737 256+0 records out 00:32:31.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130905 s, 80.1 MB/s 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:31.737 256+0 records in 00:32:31.737 256+0 records out 00:32:31.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0682611 s, 15.4 MB/s 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@51 -- # local i 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:31.737 12:52:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@41 -- # break 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@45 -- # return 0 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:31.997 12:52:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
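Condensed, the nbd_dd_data_verify round trip traced above is just a write through the NBD export followed by a byte-for-byte read-back. A minimal sketch, assuming /dev/nbd0 is already exported and using a hypothetical scratch path:

    tmp=/tmp/nbdrandtest                                      # hypothetical scratch file
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # generate 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD device
    cmp -b -n 1M "$tmp" /dev/nbd0 && echo verified            # compare the first 1 MiB byte-for-byte
    rm -f "$tmp"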
00:32:32.256 12:52:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@65 -- # true 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@65 -- # count=0 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@104 -- # count=0 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@109 -- # return 0 00:32:32.256 12:52:14 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:32.256 12:52:14 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:32.515 malloc_lvol_verify 00:32:32.515 12:52:14 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:32.774 848ff2f6-bb9a-47d0-bef5-4fe0e1696111 00:32:32.774 12:52:15 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:33.032 4d85ecb2-bf3d-4ae9-a080-afcaf3de7bee 00:32:33.032 12:52:15 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:33.032 /dev/nbd0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:33.292 mke2fs 1.46.5 (30-Dec-2021) 00:32:33.292 00:32:33.292 Filesystem too small for a journal 00:32:33.292 Discarding device blocks: 0/1024 done 00:32:33.292 Creating filesystem with 1024 4k blocks and 1024 inodes 00:32:33.292 00:32:33.292 Allocating group tables: 0/1 done 00:32:33.292 Writing inode tables: 0/1 done 00:32:33.292 Writing superblocks and filesystem accounting information: 0/1 done 00:32:33.292 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@51 -- # local i 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@41 -- # break 00:32:33.292 12:52:15 -- 
bdev/nbd_common.sh@45 -- # return 0 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:33.292 12:52:15 -- bdev/nbd_common.sh@147 -- # return 0 00:32:33.292 12:52:15 -- bdev/blockdev.sh@324 -- # killprocess 137506 00:32:33.292 12:52:15 -- common/autotest_common.sh@926 -- # '[' -z 137506 ']' 00:32:33.292 12:52:15 -- common/autotest_common.sh@930 -- # kill -0 137506 00:32:33.292 12:52:15 -- common/autotest_common.sh@931 -- # uname 00:32:33.292 12:52:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:33.292 12:52:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137506 00:32:33.292 12:52:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:33.292 killing process with pid 137506 00:32:33.292 12:52:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:33.292 12:52:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137506' 00:32:33.292 12:52:15 -- common/autotest_common.sh@945 -- # kill 137506 00:32:33.292 12:52:15 -- common/autotest_common.sh@950 -- # wait 137506 00:32:34.669 12:52:17 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:32:34.669 00:32:34.669 real 0m5.401s 00:32:34.669 user 0m7.148s 00:32:34.669 sys 0m1.538s 00:32:34.669 12:52:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.669 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:32:34.669 ************************************ 00:32:34.669 END TEST bdev_nbd 00:32:34.669 ************************************ 00:32:34.669 12:52:17 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:32:34.669 skipping fio tests on NVMe due to multi-ns failures. 00:32:34.669 12:52:17 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:32:34.669 12:52:17 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:32:34.669 12:52:17 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:34.669 12:52:17 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:34.669 12:52:17 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:34.669 12:52:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:34.669 12:52:17 -- common/autotest_common.sh@10 -- # set +x 00:32:34.669 ************************************ 00:32:34.669 START TEST bdev_verify 00:32:34.669 ************************************ 00:32:34.669 12:52:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:34.669 [2024-10-01 12:52:17.202004] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:34.669 [2024-10-01 12:52:17.202148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137695 ] 00:32:34.928 [2024-10-01 12:52:17.384371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:35.187 [2024-10-01 12:52:17.649942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.187 [2024-10-01 12:52:17.649943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.754 Running I/O for 5 seconds... 
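For reference, stripped of the run_test wrapper, the verify pass launched above reduces to a single bdevperf invocation (paths exactly as used in this workspace):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3   # queue depth 128, 4 KiB IOs, 5 s run, core mask 0x3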
00:32:41.026
00:32:41.026 Latency(us)
00:32:41.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:41.026 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:41.026 Verification LBA range: start 0x0 length 0xa0000
00:32:41.026 Nvme0n1 : 5.01 17259.30 67.42 0.00 0.00 7386.83 340.51 19160.73
00:32:41.026 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:32:41.026 Verification LBA range: start 0xa0000 length 0xa0000
00:32:41.026 Nvme0n1 : 5.01 14718.92 57.50 0.00 0.00 8662.79 287.87 22213.81
00:32:41.026 ===================================================================================================================
00:32:41.026 Total : 31978.22 124.91 0.00 0.00 7974.14 287.87 22213.81
00:32:51.150
00:32:51.150 real 0m15.235s
00:32:51.150 user 0m28.891s
00:32:51.150 sys 0m0.421s
00:32:51.150 12:52:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:51.150 12:52:32 -- common/autotest_common.sh@10 -- # set +x
00:32:51.150 ************************************
00:32:51.150 END TEST bdev_verify
00:32:51.150 ************************************
00:32:51.150 12:52:32 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:32:51.150 12:52:32 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:32:51.150 12:52:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:51.150 12:52:32 -- common/autotest_common.sh@10 -- # set +x
00:32:51.150 ************************************
00:32:51.150 START TEST bdev_verify_big_io
00:32:51.150 ************************************
00:32:51.150 12:52:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:32:51.150 [2024-10-01 12:52:32.513231] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:32:51.150 [2024-10-01 12:52:32.513369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137845 ]
00:32:51.150 [2024-10-01 12:52:32.683661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:51.150 [2024-10-01 12:52:32.962676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:51.150 [2024-10-01 12:52:32.962682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:51.150 Running I/O for 5 seconds...
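The Total row in the verify table above is just the two per-core jobs summed, which is easy to sanity-check:

    echo "17259.30 + 14718.92" | bc   # prints 31978.22, matching the Total IOPS above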
00:32:56.424
00:32:56.424 Latency(us)
00:32:56.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:56.424 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:56.424 Verification LBA range: start 0x0 length 0xa000
00:32:56.424 Nvme0n1 : 5.03 2523.46 157.72 0.00 0.00 50031.01 1190.97 94329.73
00:32:56.424 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:56.424 Verification LBA range: start 0xa000 length 0xa000
00:32:56.424 Nvme0n1 : 5.03 1603.17 100.20 0.00 0.00 78757.53 759.98 103173.14
00:32:56.424 ===================================================================================================================
00:32:56.424 Total : 4126.63 257.91 0.00 0.00 61194.36 759.98 103173.14
00:32:58.356
00:32:58.356 real 0m8.126s
00:32:58.356 user 0m14.797s
00:32:58.356 sys 0m0.272s
00:32:58.356 ************************************
00:32:58.356 END TEST bdev_verify_big_io
00:32:58.356 ************************************
00:32:58.356 12:52:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:58.356 12:52:40 -- common/autotest_common.sh@10 -- # set +x
00:32:58.356 12:52:40 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:58.356 12:52:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:32:58.356 12:52:40 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:58.356 12:52:40 -- common/autotest_common.sh@10 -- # set +x
00:32:58.356 ************************************
00:32:58.356 START TEST bdev_write_zeroes
00:32:58.356 ************************************
00:32:58.356 12:52:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:58.356 [2024-10-01 12:52:40.705148] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:32:58.356 [2024-10-01 12:52:40.705271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137960 ]
00:32:58.356 [2024-10-01 12:52:40.871799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:58.616 [2024-10-01 12:52:41.081124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:59.183 Running I/O for 1 seconds...
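In the big-I/O table above, the MiB/s column follows directly from IOPS times the 64 KiB IO size, e.g. for the first job:

    awk 'BEGIN { printf "%.2f MiB/s\n", 2523.46 * 65536 / 1048576 }'   # 157.72 MiB/s, as reported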
00:33:00.117
00:33:00.117 Latency(us)
00:33:00.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:00.117 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:00.117 Nvme0n1 : 1.00 63112.40 246.53 0.00 0.00 2023.77 575.74 10948.99
00:33:00.117 ===================================================================================================================
00:33:00.117 Total : 63112.40 246.53 0.00 0.00 2023.77 575.74 10948.99
00:33:02.018
00:33:02.018 real 0m3.490s
00:33:02.018 user 0m3.149s
00:33:02.018 sys 0m0.241s
00:33:02.018 12:52:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:02.018 12:52:44 -- common/autotest_common.sh@10 -- # set +x
00:33:02.018 ************************************
00:33:02.018 END TEST bdev_write_zeroes
00:33:02.018 ************************************
00:33:02.018 12:52:44 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:02.018 12:52:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:33:02.018 12:52:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:33:02.018 12:52:44 -- common/autotest_common.sh@10 -- # set +x
00:33:02.018 ************************************
00:33:02.018 START TEST bdev_json_nonenclosed
00:33:02.018 ************************************
00:33:02.018 12:52:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:02.018 [2024-10-01 12:52:44.270686] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:33:02.018 [2024-10-01 12:52:44.270866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138026 ]
00:33:02.018 [2024-10-01 12:52:44.446194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:02.277 [2024-10-01 12:52:44.731403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:33:02.277 [2024-10-01 12:52:44.731648] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
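That error is the point of this negative test: nonenclosed.json deliberately omits the enclosing braces. For illustration only (hypothetical scratch path), a well-formed SPDK config is a single JSON object carrying a "subsystems" array:

    printf '{\n  "subsystems": []\n}\n' > /tmp/valid.json   # hypothetical path, illustration only
    cat /tmp/valid.json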
00:33:02.277 [2024-10-01 12:52:44.731690] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:02.842 00:33:02.843 real 0m1.081s 00:33:02.843 user 0m0.817s 00:33:02.843 sys 0m0.165s 00:33:02.843 12:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:02.843 ************************************ 00:33:02.843 END TEST bdev_json_nonenclosed 00:33:02.843 12:52:45 -- common/autotest_common.sh@10 -- # set +x 00:33:02.843 ************************************ 00:33:02.843 12:52:45 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:02.843 12:52:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:02.843 12:52:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:02.843 12:52:45 -- common/autotest_common.sh@10 -- # set +x 00:33:02.843 ************************************ 00:33:02.843 START TEST bdev_json_nonarray 00:33:02.843 ************************************ 00:33:02.843 12:52:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:03.100 [2024-10-01 12:52:45.419972] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:03.100 [2024-10-01 12:52:45.420102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138065 ] 00:33:03.100 [2024-10-01 12:52:45.587412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.357 [2024-10-01 12:52:45.854973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.357 [2024-10-01 12:52:45.855219] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
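Again an intended failure: here "subsystems" is present but is not an array. A quick shape check before handing a config to SPDK might look like this (sketch, assuming jq is installed and /tmp/valid.json from the illustration above):

    jq -e '.subsystems | type == "array"' /tmp/valid.json   # exits 0 only when subsystems is an array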
00:33:03.357 [2024-10-01 12:52:45.855264] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:03.965 00:33:03.965 real 0m1.019s 00:33:03.965 user 0m0.759s 00:33:03.965 sys 0m0.160s 00:33:03.965 12:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.965 ************************************ 00:33:03.965 END TEST bdev_json_nonarray 00:33:03.965 ************************************ 00:33:03.965 12:52:46 -- common/autotest_common.sh@10 -- # set +x 00:33:03.965 12:52:46 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:33:03.965 12:52:46 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:33:03.965 12:52:46 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:33:03.965 12:52:46 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:33:03.965 12:52:46 -- bdev/blockdev.sh@809 -- # cleanup 00:33:03.965 12:52:46 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:33:03.965 12:52:46 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:03.965 12:52:46 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:33:03.965 12:52:46 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:33:03.965 12:52:46 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:33:03.965 12:52:46 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:33:03.965 00:33:03.965 real 0m45.825s 00:33:03.965 user 1m11.150s 00:33:03.965 sys 0m4.805s 00:33:03.965 12:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.965 12:52:46 -- common/autotest_common.sh@10 -- # set +x 00:33:03.965 ************************************ 00:33:03.965 END TEST blockdev_nvme 00:33:03.965 ************************************ 00:33:04.223 12:52:46 -- spdk/autotest.sh@219 -- # uname -s 00:33:04.223 12:52:46 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:33:04.223 12:52:46 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:33:04.223 12:52:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:04.223 12:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:04.223 12:52:46 -- common/autotest_common.sh@10 -- # set +x 00:33:04.223 ************************************ 00:33:04.223 START TEST blockdev_nvme_gpt 00:33:04.223 ************************************ 00:33:04.223 12:52:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:33:04.223 * Looking for test storage... 
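The gpt flavor that starts here can be reproduced on its own; without the run_test wrapper, the harness effectively executes:

    /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt   # assumes the same workspace layout as above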
00:33:04.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:04.223 12:52:46 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:04.223 12:52:46 -- bdev/nbd_common.sh@6 -- # set -e 00:33:04.223 12:52:46 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:04.223 12:52:46 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:04.223 12:52:46 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:04.223 12:52:46 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:04.223 12:52:46 -- bdev/blockdev.sh@18 -- # : 00:33:04.223 12:52:46 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:33:04.223 12:52:46 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:33:04.223 12:52:46 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:33:04.223 12:52:46 -- bdev/blockdev.sh@672 -- # uname -s 00:33:04.223 12:52:46 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:33:04.223 12:52:46 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:33:04.223 12:52:46 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:33:04.223 12:52:46 -- bdev/blockdev.sh@681 -- # crypto_device= 00:33:04.223 12:52:46 -- bdev/blockdev.sh@682 -- # dek= 00:33:04.223 12:52:46 -- bdev/blockdev.sh@683 -- # env_ctx= 00:33:04.223 12:52:46 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:33:04.223 12:52:46 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:33:04.223 12:52:46 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:33:04.223 12:52:46 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:33:04.223 12:52:46 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:33:04.223 12:52:46 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=138148 00:33:04.223 12:52:46 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:04.223 12:52:46 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:04.224 12:52:46 -- bdev/blockdev.sh@47 -- # waitforlisten 138148 00:33:04.224 12:52:46 -- common/autotest_common.sh@819 -- # '[' -z 138148 ']' 00:33:04.224 12:52:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.224 12:52:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:04.224 12:52:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.224 12:52:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:04.224 12:52:46 -- common/autotest_common.sh@10 -- # set +x 00:33:04.224 [2024-10-01 12:52:46.751287] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
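waitforlisten, traced above, polls the RPC socket until the freshly started spdk_tgt answers. A minimal sketch of the same start-and-wait pattern, assuming the default socket /var/tmp/spdk.sock:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &           # start the target in the background
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                               # retry until the RPC socket responds
    done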
00:33:04.224 [2024-10-01 12:52:46.751553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138148 ] 00:33:04.481 [2024-10-01 12:52:46.947814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.740 [2024-10-01 12:52:47.221255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:04.740 [2024-10-01 12:52:47.221520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.116 12:52:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:06.116 12:52:48 -- common/autotest_common.sh@852 -- # return 0 00:33:06.116 12:52:48 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:33:06.116 12:52:48 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:33:06.116 12:52:48 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:06.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:06.374 Waiting for block devices as requested 00:33:06.374 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:33:06.646 12:52:48 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:33:06.646 12:52:48 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:33:06.646 12:52:48 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:33:06.646 12:52:48 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:33:06.646 12:52:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:33:06.646 12:52:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:33:06.646 12:52:48 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:33:06.646 12:52:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:06.646 12:52:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:33:06.646 12:52:48 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:33:06.646 12:52:48 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:33:06.646 12:52:48 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:33:06.646 12:52:48 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:33:06.646 12:52:48 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:33:06.646 12:52:48 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:33:06.646 12:52:48 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:33:06.646 12:52:48 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:33:06.646 BYT; 00:33:06.646 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:33:06.646 12:52:48 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:33:06.646 BYT; 00:33:06.646 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:33:06.646 12:52:48 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:33:06.646 12:52:48 -- bdev/blockdev.sh@114 -- # break 00:33:06.646 12:52:48 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:33:06.646 12:52:48 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:33:06.646 12:52:48 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:33:06.646 12:52:48 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:33:06.905 12:52:49 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:33:06.905 12:52:49 -- scripts/common.sh@410 -- # local spdk_guid 00:33:06.905 12:52:49 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:33:06.905 12:52:49 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:06.905 12:52:49 -- scripts/common.sh@415 -- # IFS='()' 00:33:06.905 12:52:49 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:33:06.905 12:52:49 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:06.905 12:52:49 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:33:06.905 12:52:49 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:33:06.905 12:52:49 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:33:06.905 12:52:49 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:33:06.905 12:52:49 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:33:06.905 12:52:49 -- scripts/common.sh@422 -- # local spdk_guid 00:33:06.905 12:52:49 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:33:06.905 12:52:49 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:06.905 12:52:49 -- scripts/common.sh@427 -- # IFS='()' 00:33:06.905 12:52:49 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:33:06.905 12:52:49 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:06.905 12:52:49 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:33:06.905 12:52:49 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:33:06.905 12:52:49 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:33:06.905 12:52:49 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:33:06.905 12:52:49 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:33:08.320 The operation has completed successfully. 00:33:08.320 12:52:50 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:33:09.256 The operation has completed successfully. 
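Condensed, the two sgdisk calls above retag the freshly created partitions with the SPDK type GUIDs that were extracted from module/bdev/gpt/gpt.h:

    SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b        # SPDK_GPT_PART_TYPE_GUID
    SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c    # SPDK_GPT_PART_TYPE_GUID_OLD
    sgdisk -t 1:$SPDK_GPT_GUID     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:$SPDK_GPT_OLD_GUID -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1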
00:33:09.256 12:52:51 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:09.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:09.773 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:33:11.150 12:52:53 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:33:11.151 12:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.151 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:33:11.151 [] 00:33:11.151 12:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.151 12:52:53 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:33:11.151 12:52:53 -- bdev/blockdev.sh@79 -- # local json 00:33:11.151 12:52:53 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:33:11.151 12:52:53 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:11.151 12:52:53 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:33:11.151 12:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.151 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:33:11.410 12:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.410 12:52:53 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:33:11.410 12:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.411 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:33:11.411 12:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.411 12:52:53 -- bdev/blockdev.sh@738 -- # cat 00:33:11.411 12:52:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:33:11.411 12:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.411 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:33:11.411 12:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.411 12:52:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:33:11.411 12:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.411 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:33:11.411 12:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.411 12:52:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:11.411 12:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.411 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:33:11.411 12:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.411 12:52:53 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:33:11.411 12:52:53 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:33:11.411 12:52:53 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:33:11.411 12:52:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.411 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:33:11.411 12:52:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.411 12:52:53 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:33:11.411 12:52:53 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:33:11.411 12:52:53 -- bdev/blockdev.sh@747 -- # jq -r .name 00:33:11.411 12:52:53 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:33:11.411 12:52:53 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:33:11.411 12:52:53 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:33:11.411 12:52:53 -- bdev/blockdev.sh@752 -- # killprocess 138148 00:33:11.411 12:52:53 -- common/autotest_common.sh@926 -- # '[' -z 138148 ']' 00:33:11.411 12:52:53 -- common/autotest_common.sh@930 -- # kill -0 138148 00:33:11.411 12:52:53 -- common/autotest_common.sh@931 -- # uname 00:33:11.411 12:52:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:11.411 12:52:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138148 00:33:11.411 12:52:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:11.411 12:52:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:11.411 killing process with pid 138148 00:33:11.411 12:52:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138148' 00:33:11.411 12:52:53 -- common/autotest_common.sh@945 -- # kill 138148 00:33:11.411 12:52:53 -- common/autotest_common.sh@950 -- # wait 138148 00:33:14.033 12:52:56 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:14.033 12:52:56 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:33:14.033 12:52:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:33:14.033 12:52:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:14.033 12:52:56 -- common/autotest_common.sh@10 -- # set +x 00:33:14.033 ************************************ 00:33:14.033 START TEST bdev_hello_world 00:33:14.033 ************************************ 00:33:14.033 12:52:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:33:14.033 [2024-10-01 12:52:56.350483] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:14.033 [2024-10-01 12:52:56.350619] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138603 ] 00:33:14.033 [2024-10-01 12:52:56.519671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.291 [2024-10-01 12:52:56.719283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.855 [2024-10-01 12:52:57.169607] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:14.855 [2024-10-01 12:52:57.169678] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:33:14.855 [2024-10-01 12:52:57.169714] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:14.855 [2024-10-01 12:52:57.172744] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:14.855 [2024-10-01 12:52:57.173293] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:14.855 [2024-10-01 12:52:57.173345] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:14.855 [2024-10-01 12:52:57.173585] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:33:14.855 00:33:14.855 [2024-10-01 12:52:57.173621] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:16.229 00:33:16.229 real 0m2.183s 00:33:16.229 user 0m1.867s 00:33:16.229 sys 0m0.216s 00:33:16.229 12:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:16.229 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:33:16.229 ************************************ 00:33:16.229 END TEST bdev_hello_world 00:33:16.229 ************************************ 00:33:16.229 12:52:58 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:33:16.229 12:52:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:16.229 12:52:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:16.229 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:33:16.229 ************************************ 00:33:16.229 START TEST bdev_bounds 00:33:16.229 ************************************ 00:33:16.229 12:52:58 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:33:16.229 12:52:58 -- bdev/blockdev.sh@288 -- # bdevio_pid=138653 00:33:16.229 12:52:58 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:16.229 12:52:58 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:16.229 Process bdevio pid: 138653 00:33:16.229 12:52:58 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 138653' 00:33:16.229 12:52:58 -- bdev/blockdev.sh@291 -- # waitforlisten 138653 00:33:16.229 12:52:58 -- common/autotest_common.sh@819 -- # '[' -z 138653 ']' 00:33:16.229 12:52:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.229 12:52:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:16.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.229 12:52:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
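The bdev_hello_world test that completed above boils down to one invocation of the hello_bdev example against the first GPT partition (paths as in this workspace):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1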
00:33:16.229 12:52:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:16.229 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:33:16.229 [2024-10-01 12:52:58.612641] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:16.229 [2024-10-01 12:52:58.612783] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138653 ] 00:33:16.487 [2024-10-01 12:52:58.787294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.744 [2024-10-01 12:52:59.066450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.744 [2024-10-01 12:52:59.066652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.744 [2024-10-01 12:52:59.066651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:17.774 12:53:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:17.774 12:53:00 -- common/autotest_common.sh@852 -- # return 0 00:33:17.774 12:53:00 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:17.774 I/O targets: 00:33:17.774 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:33:17.774 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:33:17.774 00:33:17.774 00:33:17.774 CUnit - A unit testing framework for C - Version 2.1-3 00:33:17.774 http://cunit.sourceforge.net/ 00:33:17.774 00:33:17.774 00:33:17.774 Suite: bdevio tests on: Nvme0n1p2 00:33:17.774 Test: blockdev write read block ...passed 00:33:17.774 Test: blockdev write zeroes read block ...passed 00:33:17.774 Test: blockdev write zeroes read no split ...passed 00:33:17.774 Test: blockdev write zeroes read split ...passed 00:33:18.032 Test: blockdev write zeroes read split partial ...passed 00:33:18.032 Test: blockdev reset ...[2024-10-01 12:53:00.337098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:33:18.032 [2024-10-01 12:53:00.341391] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:18.032 passed 00:33:18.032 Test: blockdev write read 8 blocks ...passed 00:33:18.032 Test: blockdev write read size > 128k ...passed 00:33:18.032 Test: blockdev write read invalid size ...passed 00:33:18.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:18.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:18.032 Test: blockdev write read max offset ...passed 00:33:18.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:18.032 Test: blockdev writev readv 8 blocks ...passed 00:33:18.032 Test: blockdev writev readv 30 x 1block ...passed 00:33:18.032 Test: blockdev writev readv block ...passed 00:33:18.032 Test: blockdev writev readv size > 128k ...passed 00:33:18.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:18.032 Test: blockdev comparev and writev ...[2024-10-01 12:53:00.352471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27a0b000 len:0x1000 00:33:18.032 [2024-10-01 12:53:00.352676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:33:18.032 passed 00:33:18.032 Test: blockdev nvme passthru rw ...passed 00:33:18.032 Test: blockdev nvme passthru vendor specific ...passed 00:33:18.032 Test: blockdev nvme admin passthru ...passed 00:33:18.032 Test: blockdev copy ...passed 00:33:18.032 Suite: bdevio tests on: Nvme0n1p1 00:33:18.032 Test: blockdev write read block ...passed 00:33:18.032 Test: blockdev write zeroes read block ...passed 00:33:18.032 Test: blockdev write zeroes read no split ...passed 00:33:18.032 Test: blockdev write zeroes read split ...passed 00:33:18.032 Test: blockdev write zeroes read split partial ...passed 00:33:18.032 Test: blockdev reset ...[2024-10-01 12:53:00.431016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:33:18.032 [2024-10-01 12:53:00.435087] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:18.032 passed 00:33:18.032 Test: blockdev write read 8 blocks ...passed 00:33:18.032 Test: blockdev write read size > 128k ...passed 00:33:18.032 Test: blockdev write read invalid size ...passed 00:33:18.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:18.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:18.032 Test: blockdev write read max offset ...passed 00:33:18.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:18.032 Test: blockdev writev readv 8 blocks ...passed 00:33:18.032 Test: blockdev writev readv 30 x 1block ...passed 00:33:18.032 Test: blockdev writev readv block ...passed 00:33:18.032 Test: blockdev writev readv size > 128k ...passed 00:33:18.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:18.032 Test: blockdev comparev and writev ...[2024-10-01 12:53:00.445693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27a0d000 len:0x1000 00:33:18.032 [2024-10-01 12:53:00.445918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:33:18.032 passed 00:33:18.032 Test: blockdev nvme passthru rw ...passed 00:33:18.032 Test: blockdev nvme passthru vendor specific ...passed 00:33:18.032 Test: blockdev nvme admin passthru ...passed 00:33:18.032 Test: blockdev copy ...passed 00:33:18.032 00:33:18.032 Run Summary: Type Total Ran Passed Failed Inactive 00:33:18.032 suites 2 2 n/a 0 0 00:33:18.032 tests 46 46 46 0 0 00:33:18.032 asserts 284 284 284 0 n/a 00:33:18.032 00:33:18.032 Elapsed time = 0.501 seconds 00:33:18.032 0 00:33:18.032 12:53:00 -- bdev/blockdev.sh@293 -- # killprocess 138653 00:33:18.032 12:53:00 -- common/autotest_common.sh@926 -- # '[' -z 138653 ']' 00:33:18.032 12:53:00 -- common/autotest_common.sh@930 -- # kill -0 138653 00:33:18.032 12:53:00 -- common/autotest_common.sh@931 -- # uname 00:33:18.032 12:53:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:18.032 12:53:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138653 00:33:18.033 12:53:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:18.033 12:53:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:18.033 12:53:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138653' 00:33:18.033 killing process with pid 138653 00:33:18.033 12:53:00 -- common/autotest_common.sh@945 -- # kill 138653 00:33:18.033 12:53:00 -- common/autotest_common.sh@950 -- # wait 138653 00:33:19.935 12:53:01 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:33:19.935 00:33:19.935 real 0m3.450s 00:33:19.935 user 0m8.454s 00:33:19.935 sys 0m0.392s 00:33:19.935 12:53:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:19.935 12:53:01 -- common/autotest_common.sh@10 -- # set +x 00:33:19.935 ************************************ 00:33:19.935 END TEST bdev_bounds 00:33:19.935 ************************************ 00:33:19.935 12:53:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:33:19.935 12:53:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:33:19.935 12:53:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:19.935 12:53:02 -- common/autotest_common.sh@10 -- # set +x 00:33:19.935 ************************************ 00:33:19.935 START TEST bdev_nbd 
00:33:19.935 ************************************ 00:33:19.935 12:53:02 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:33:19.935 12:53:02 -- bdev/blockdev.sh@298 -- # uname -s 00:33:19.935 12:53:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:33:19.935 12:53:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:19.935 12:53:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:19.935 12:53:02 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:33:19.935 12:53:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:33:19.935 12:53:02 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:33:19.935 12:53:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:33:19.935 12:53:02 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:19.935 12:53:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:33:19.935 12:53:02 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:33:19.935 12:53:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:19.935 12:53:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:33:19.935 12:53:02 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:33:19.935 12:53:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:33:19.935 12:53:02 -- bdev/blockdev.sh@316 -- # nbd_pid=138737 00:33:19.935 12:53:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:19.935 12:53:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:19.935 12:53:02 -- bdev/blockdev.sh@318 -- # waitforlisten 138737 /var/tmp/spdk-nbd.sock 00:33:19.935 12:53:02 -- common/autotest_common.sh@819 -- # '[' -z 138737 ']' 00:33:19.935 12:53:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:19.935 12:53:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:19.935 12:53:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:19.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:19.935 12:53:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:19.935 12:53:02 -- common/autotest_common.sh@10 -- # set +x 00:33:19.935 [2024-10-01 12:53:02.152001] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:33:19.935 [2024-10-01 12:53:02.152789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.935 [2024-10-01 12:53:02.321559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.193 [2024-10-01 12:53:02.554373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.760 12:53:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:20.760 12:53:03 -- common/autotest_common.sh@852 -- # return 0 00:33:20.760 12:53:03 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@24 -- # local i 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:20.760 12:53:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:33:21.019 12:53:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:21.019 12:53:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:21.019 12:53:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:21.019 12:53:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:33:21.019 12:53:03 -- common/autotest_common.sh@857 -- # local i 00:33:21.019 12:53:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:33:21.019 12:53:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:33:21.019 12:53:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:33:21.019 12:53:03 -- common/autotest_common.sh@861 -- # break 00:33:21.019 12:53:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:33:21.019 12:53:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:33:21.019 12:53:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:21.019 1+0 records in 00:33:21.019 1+0 records out 00:33:21.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00090963 s, 4.5 MB/s 00:33:21.019 12:53:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.019 12:53:03 -- common/autotest_common.sh@874 -- # size=4096 00:33:21.019 12:53:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.019 12:53:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:33:21.019 12:53:03 -- common/autotest_common.sh@877 -- # return 0 00:33:21.019 12:53:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:21.019 12:53:03 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:21.019 12:53:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:33:21.277 12:53:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:33:21.277 12:53:03 -- common/autotest_common.sh@857 -- # local i 00:33:21.277 12:53:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:33:21.277 12:53:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:33:21.277 12:53:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:33:21.277 12:53:03 -- common/autotest_common.sh@861 -- # break 00:33:21.277 12:53:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:33:21.277 12:53:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:33:21.277 12:53:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:21.277 1+0 records in 00:33:21.277 1+0 records out 00:33:21.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613015 s, 6.7 MB/s 00:33:21.277 12:53:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.277 12:53:03 -- common/autotest_common.sh@874 -- # size=4096 00:33:21.277 12:53:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.277 12:53:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:33:21.277 12:53:03 -- common/autotest_common.sh@877 -- # return 0 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:21.277 { 00:33:21.277 "nbd_device": "/dev/nbd0", 00:33:21.277 "bdev_name": "Nvme0n1p1" 00:33:21.277 }, 00:33:21.277 { 00:33:21.277 "nbd_device": "/dev/nbd1", 00:33:21.277 "bdev_name": "Nvme0n1p2" 00:33:21.277 } 00:33:21.277 ]' 00:33:21.277 12:53:03 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:21.537 { 00:33:21.537 "nbd_device": "/dev/nbd0", 00:33:21.537 "bdev_name": "Nvme0n1p1" 00:33:21.537 }, 00:33:21.537 { 00:33:21.537 "nbd_device": "/dev/nbd1", 00:33:21.537 "bdev_name": "Nvme0n1p2" 00:33:21.537 } 00:33:21.537 ]' 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@51 -- # local i 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:21.537 12:53:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:21.537 12:53:04 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@41 -- # break 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@45 -- # return 0 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:21.537 12:53:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@41 -- # break 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@45 -- # return 0 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:21.796 12:53:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@65 -- # true 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@65 -- # count=0 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@122 -- # count=0 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@127 -- # return 0 00:33:22.054 12:53:04 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@12 -- # local i 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:22.054 12:53:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:33:22.312 /dev/nbd0 00:33:22.312 12:53:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:22.312 12:53:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:22.312 12:53:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:33:22.312 12:53:04 -- common/autotest_common.sh@857 -- # local i 00:33:22.312 12:53:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:33:22.312 12:53:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:33:22.312 12:53:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:33:22.312 12:53:04 -- common/autotest_common.sh@861 -- # break 00:33:22.312 12:53:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:33:22.312 12:53:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:33:22.312 12:53:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:22.312 1+0 records in 00:33:22.312 1+0 records out 00:33:22.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508806 s, 8.1 MB/s 00:33:22.312 12:53:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:22.312 12:53:04 -- common/autotest_common.sh@874 -- # size=4096 00:33:22.312 12:53:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:22.312 12:53:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:33:22.312 12:53:04 -- common/autotest_common.sh@877 -- # return 0 00:33:22.312 12:53:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:22.312 12:53:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:22.312 12:53:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:33:22.570 /dev/nbd1 00:33:22.570 12:53:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:22.570 12:53:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:22.570 12:53:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:33:22.570 12:53:05 -- common/autotest_common.sh@857 -- # local i 00:33:22.570 12:53:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:33:22.570 12:53:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:33:22.570 12:53:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:33:22.570 12:53:05 -- common/autotest_common.sh@861 -- # break 00:33:22.570 12:53:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:33:22.570 12:53:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:33:22.570 12:53:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:22.570 1+0 records in 00:33:22.570 1+0 records out 00:33:22.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113607 s, 3.6 MB/s 00:33:22.570 12:53:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:22.570 12:53:05 -- common/autotest_common.sh@874 -- # size=4096 00:33:22.570 12:53:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:22.841 12:53:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:33:22.841 12:53:05 -- common/autotest_common.sh@877 -- # return 0 00:33:22.841 12:53:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:22.842 12:53:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:22.842 12:53:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
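Condensed, the attach sequence traced above is: ask the SPDK app serving /var/tmp/spdk-nbd.sock to bind a bdev to a free /dev/nbdX node, poll /proc/partitions until the kernel has registered the disk, then issue one direct-I/O read as a smoke test. A minimal standalone sketch, assuming a bdev_svc app is already listening on that socket and exposes Nvme0n1p1 as in the trace (the sleep interval and the /tmp output path are illustrative, not from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock nbd_start_disk Nvme0n1p1 /dev/nbd0    # RPC prints the device node it bound
for i in $(seq 1 20); do                            # wait until the kernel sees the disk
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1                                       # polling interval is an assumption
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one 4 KiB direct read as a smoke test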
00:33:22.842 12:53:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:22.842 12:53:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:22.842 12:53:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:22.842 { 00:33:22.842 "nbd_device": "/dev/nbd0", 00:33:22.842 "bdev_name": "Nvme0n1p1" 00:33:22.842 }, 00:33:22.842 { 00:33:22.842 "nbd_device": "/dev/nbd1", 00:33:22.842 "bdev_name": "Nvme0n1p2" 00:33:22.842 } 00:33:22.842 ]' 00:33:22.842 12:53:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:22.842 { 00:33:22.842 "nbd_device": "/dev/nbd0", 00:33:22.842 "bdev_name": "Nvme0n1p1" 00:33:22.842 }, 00:33:22.842 { 00:33:22.842 "nbd_device": "/dev/nbd1", 00:33:22.842 "bdev_name": "Nvme0n1p2" 00:33:22.842 } 00:33:22.842 ]' 00:33:22.842 12:53:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:23.128 /dev/nbd1' 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:23.128 /dev/nbd1' 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@65 -- # count=2 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@95 -- # count=2 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:23.128 256+0 records in 00:33:23.128 256+0 records out 00:33:23.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135985 s, 77.1 MB/s 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:23.128 256+0 records in 00:33:23.128 256+0 records out 00:33:23.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0811778 s, 12.9 MB/s 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:23.128 256+0 records in 00:33:23.128 256+0 records out 00:33:23.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0885107 s, 11.8 MB/s 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
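The write phase above and the verify phase that follows reduce to a simple round-trip: push 1 MiB of random data through each NBD device with direct I/O, then byte-compare the device contents against the source file. A sketch of the same check, assuming both devices are attached as in the trace (the temp-file path is illustrative):

tmp=/tmp/nbdrandtest
dd if=/dev/urandom of=$tmp bs=4096 count=256              # 1 MiB of random test data
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$dev bs=4096 count=256 oflag=direct     # write phase: 256 x 4 KiB through NBD
done
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp $dev                                # verify phase: first 1 MiB must match byte-for-byte
done
rm $tmp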
00:33:23.128 12:53:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@51 -- # local i 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.128 12:53:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@41 -- # break 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.388 12:53:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@41 -- # break 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.647 12:53:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@65 -- # true 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@65 -- # count=0 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@104 -- # count=0 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:23.906 12:53:06 -- 
bdev/nbd_common.sh@109 -- # return 0 00:33:23.906 12:53:06 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:23.906 12:53:06 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:24.165 malloc_lvol_verify 00:33:24.165 12:53:06 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:24.424 50b00ddf-78d4-4a2c-b3c3-3cbab73e03d1 00:33:24.424 12:53:06 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:24.424 cb4fa419-6421-4875-898d-5cc48bbf0bfd 00:33:24.424 12:53:06 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:24.684 /dev/nbd0 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:24.684 mke2fs 1.46.5 (30-Dec-2021) 00:33:24.684 00:33:24.684 Filesystem too small for a journal 00:33:24.684 Discarding device blocks: 0/1024 done 00:33:24.684 Creating filesystem with 1024 4k blocks and 1024 inodes 00:33:24.684 00:33:24.684 Allocating group tables: 0/1 done 00:33:24.684 Writing inode tables: 0/1 done 00:33:24.684 Writing superblocks and filesystem accounting information: 0/1 done 00:33:24.684 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@51 -- # local i 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:24.684 12:53:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@41 -- # break 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@45 -- # return 0 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:24.943 12:53:07 -- bdev/nbd_common.sh@147 -- # return 0 00:33:24.943 12:53:07 -- bdev/blockdev.sh@324 -- # killprocess 138737 00:33:24.943 12:53:07 -- common/autotest_common.sh@926 -- # '[' -z 138737 ']' 00:33:24.943 12:53:07 -- common/autotest_common.sh@930 -- # kill -0 138737 00:33:24.943 12:53:07 -- common/autotest_common.sh@931 -- # uname 00:33:24.943 12:53:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:24.943 12:53:07 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138737 00:33:24.943 12:53:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:24.943 12:53:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:24.943 12:53:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138737' 00:33:24.943 killing process with pid 138737 00:33:24.943 12:53:07 -- common/autotest_common.sh@945 -- # kill 138737 00:33:24.943 12:53:07 -- common/autotest_common.sh@950 -- # wait 138737 00:33:26.323 ************************************ 00:33:26.323 END TEST bdev_nbd 00:33:26.323 ************************************ 00:33:26.323 12:53:08 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:33:26.323 00:33:26.323 real 0m6.731s 00:33:26.323 user 0m8.975s 00:33:26.323 sys 0m2.037s 00:33:26.323 12:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.323 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:33:26.582 12:53:08 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:33:26.582 12:53:08 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:33:26.582 12:53:08 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:33:26.582 skipping fio tests on NVMe due to multi-ns failures. 00:33:26.582 12:53:08 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:33:26.582 12:53:08 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:26.582 12:53:08 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:26.582 12:53:08 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:33:26.582 12:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:26.582 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:33:26.582 ************************************ 00:33:26.582 START TEST bdev_verify 00:33:26.582 ************************************ 00:33:26.582 12:53:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:26.582 [2024-10-01 12:53:08.957572] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:26.582 [2024-10-01 12:53:08.958381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138982 ] 00:33:26.840 [2024-10-01 12:53:09.125383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:27.134 [2024-10-01 12:53:09.411709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.134 [2024-10-01 12:53:09.411713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.702 Running I/O for 5 seconds... 
00:33:33.011 00:33:33.011 Latency(us)
00:33:33.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.011 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:33.011 Verification LBA range: start 0x0 length 0x4ff80
00:33:33.011 Nvme0n1p1 : 5.02 6520.33 25.47 0.00 0.00 19584.67 2684.61 30530.83
00:33:33.011 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:33.011 Verification LBA range: start 0x4ff80 length 0x4ff80
00:33:33.011 Nvme0n1p1 : 5.02 4411.22 17.23 0.00 0.00 28951.98 1006.73 30741.38
00:33:33.011 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:33.011 Verification LBA range: start 0x0 length 0x4ff7f
00:33:33.011 Nvme0n1p2 : 5.02 6518.10 25.46 0.00 0.00 19572.14 2092.41 30320.27
00:33:33.011 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:33.011 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:33:33.011 Nvme0n1p2 : 5.02 4410.29 17.23 0.00 0.00 28920.83 1329.14 30109.71
00:33:33.011 ===================================================================================================================
00:33:33.011 Total : 21859.94 85.39 0.00 0.00 23356.44 1006.73 30741.38
00:33:35.543 00:33:35.543 real 0m9.128s 00:33:35.543 user 0m16.737s 00:33:35.543 sys 0m0.367s 00:33:35.543 12:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:35.543 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:33:35.543 ************************************ 00:33:35.543 END TEST bdev_verify 00:33:35.543 ************************************ 00:33:35.801 12:53:18 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:35.801 12:53:18 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:33:35.801 12:53:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:35.801 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:33:35.801 ************************************ 00:33:35.801 START TEST bdev_verify_big_io 00:33:35.801 ************************************ 00:33:35.801 12:53:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:35.801 [2024-10-01 12:53:18.181840] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:35.801 [2024-10-01 12:53:18.182014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139109 ] 00:33:36.060 [2024-10-01 12:53:18.358346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:36.319 [2024-10-01 12:53:18.624632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.319 [2024-10-01 12:53:18.624635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.886 Running I/O for 5 seconds...
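Both verify passes here are driven by the bdevperf example app; the invocation pattern, reassembled from the traced command lines (flag notes are a reading of the trace, not taken from it):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \  # bdev config describing the two GPT partitions
    -q 128 \       # queue depth per job
    -o 4096 \      # I/O size in bytes (the big-I/O pass uses 65536)
    -w verify \    # write, read back, and check data integrity
    -t 5 \         # run time in seconds
    -C -m 0x3      # copied verbatim from the trace: run on the 0x3 core mask,
                   # with each core driving I/O to each bdev (hence the per-core job rows)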
00:33:42.202 00:33:42.202 Latency(us)
00:33:42.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.202 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:33:42.202 Verification LBA range: start 0x0 length 0x4ff8
00:33:42.202 Nvme0n1p1 : 5.07 1387.49 86.72 0.00 0.00 91512.43 2710.93 128861.15
00:33:42.202 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:33:42.202 Verification LBA range: start 0x4ff8 length 0x4ff8
00:33:42.202 Nvme0n1p1 : 5.09 747.20 46.70 0.00 0.00 168621.18 8317.02 264460.13
00:33:42.202 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:33:42.202 Verification LBA range: start 0x0 length 0x4ff7
00:33:42.202 Nvme0n1p2 : 5.07 1386.77 86.67 0.00 0.00 90835.11 4000.59 99383.11
00:33:42.202 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:33:42.202 Verification LBA range: start 0x4ff7 length 0x4ff7
00:33:42.202 Nvme0n1p2 : 5.11 784.54 49.03 0.00 0.00 158068.64 634.96 186132.77
00:33:42.202 ===================================================================================================================
00:33:42.202 Total : 4305.99 269.12 0.00 0.00 116897.57 634.96 264460.13
00:33:44.100 00:33:44.100 real 0m8.165s 00:33:44.100 user 0m14.841s 00:33:44.100 sys 0m0.354s 00:33:44.100 12:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.100 ************************************ 00:33:44.100 END TEST bdev_verify_big_io 00:33:44.100 12:53:26 -- common/autotest_common.sh@10 -- # set +x 00:33:44.100 ************************************ 00:33:44.100 12:53:26 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:44.100 12:53:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:44.100 12:53:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:44.100 12:53:26 -- common/autotest_common.sh@10 -- # set +x 00:33:44.100 ************************************ 00:33:44.100 START TEST bdev_write_zeroes 00:33:44.100 ************************************ 00:33:44.100 12:53:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:44.100 [2024-10-01 12:53:26.424377] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:44.100 [2024-10-01 12:53:26.424537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139224 ] 00:33:44.360 [2024-10-01 12:53:26.597517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.360 [2024-10-01 12:53:26.852288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.928 Running I/O for 1 seconds...
00:33:46.302 00:33:46.302 Latency(us)
00:33:46.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:46.302 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:46.302 Nvme0n1p1 : 1.00 27271.98 106.53 0.00 0.00 4685.26 2105.57 22424.37
00:33:46.302 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:46.302 Nvme0n1p2 : 1.00 27313.95 106.70 0.00 0.00 4672.21 2131.89 22319.09
00:33:46.302 ===================================================================================================================
00:33:46.302 Total : 54585.92 213.23 0.00 0.00 4678.73 2105.57 22424.37
00:33:47.234 00:33:47.234 real 0m3.400s 00:33:47.234 user 0m2.990s 00:33:47.234 sys 0m0.313s 00:33:47.234 12:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:47.234 ************************************ 00:33:47.234 END TEST bdev_write_zeroes 00:33:47.234 ************************************ 00:33:47.234 12:53:29 -- common/autotest_common.sh@10 -- # set +x 00:33:47.492 12:53:29 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:47.492 12:53:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:47.492 12:53:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:47.492 12:53:29 -- common/autotest_common.sh@10 -- # set +x 00:33:47.492 ************************************ 00:33:47.492 START TEST bdev_json_nonenclosed 00:33:47.492 ************************************ 00:33:47.492 12:53:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:47.492 [2024-10-01 12:53:29.903604] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:47.492 [2024-10-01 12:53:29.903759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139286 ] 00:33:47.750 [2024-10-01 12:53:30.060759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.008 [2024-10-01 12:53:30.320501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.008 [2024-10-01 12:53:30.320716] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:33:48.008 [2024-10-01 12:53:30.320754] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:48.582 00:33:48.582 real 0m0.991s 00:33:48.582 user 0m0.755s 00:33:48.582 sys 0m0.137s 00:33:48.582 12:53:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.582 12:53:30 -- common/autotest_common.sh@10 -- # set +x 00:33:48.582 ************************************ 00:33:48.582 END TEST bdev_json_nonenclosed 00:33:48.582 ************************************ 00:33:48.582 12:53:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.582 12:53:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:48.582 12:53:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:48.582 12:53:30 -- common/autotest_common.sh@10 -- # set +x 00:33:48.582 ************************************ 00:33:48.582 START TEST bdev_json_nonarray 00:33:48.582 ************************************ 00:33:48.582 12:53:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.582 [2024-10-01 12:53:30.973170] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:48.582 [2024-10-01 12:53:30.973330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139325 ] 00:33:48.862 [2024-10-01 12:53:31.142027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.126 [2024-10-01 12:53:31.393437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.126 [2024-10-01 12:53:31.393662] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
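Both JSON negative tests above follow the same pattern: hand bdevperf a malformed --json config and require it to stop with the logged error. A valid SPDK JSON config is an object whose "subsystems" key holds an array; what the nonarray case asserts, sketched (the temp-file path is illustrative):

cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": "not-an-array" }
EOF
# The app must exit non-zero with the "'subsystems' should be an array" error seen above.
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonarray.json \
        -q 128 -o 4096 -w write_zeroes -t 1; then
    echo 'bdevperf unexpectedly accepted a bad config' >&2
    exit 1
fi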
00:33:49.126 [2024-10-01 12:53:31.393702] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:49.385 00:33:49.385 real 0m1.006s 00:33:49.385 user 0m0.745s 00:33:49.385 sys 0m0.162s 00:33:49.385 12:53:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:49.385 12:53:31 -- common/autotest_common.sh@10 -- # set +x 00:33:49.385 ************************************ 00:33:49.385 END TEST bdev_json_nonarray 00:33:49.385 ************************************ 00:33:49.644 12:53:31 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:33:49.644 12:53:31 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:33:49.644 12:53:31 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:33:49.644 12:53:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:49.644 12:53:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:49.644 12:53:31 -- common/autotest_common.sh@10 -- # set +x 00:33:49.644 ************************************ 00:33:49.644 START TEST bdev_gpt_uuid 00:33:49.644 ************************************ 00:33:49.644 12:53:31 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:33:49.644 12:53:31 -- bdev/blockdev.sh@612 -- # local bdev 00:33:49.644 12:53:31 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:33:49.644 12:53:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=139363 00:33:49.644 12:53:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:49.644 12:53:31 -- bdev/blockdev.sh@47 -- # waitforlisten 139363 00:33:49.644 12:53:31 -- common/autotest_common.sh@819 -- # '[' -z 139363 ']' 00:33:49.644 12:53:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.644 12:53:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:49.644 12:53:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.644 12:53:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:49.644 12:53:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:49.644 12:53:31 -- common/autotest_common.sh@10 -- # set +x 00:33:49.644 [2024-10-01 12:53:32.077026] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:33:49.644 [2024-10-01 12:53:32.077960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139363 ] 00:33:49.902 [2024-10-01 12:53:32.249201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.160 [2024-10-01 12:53:32.485057] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:50.160 [2024-10-01 12:53:32.485277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.536 12:53:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:51.536 12:53:33 -- common/autotest_common.sh@852 -- # return 0 00:33:51.536 12:53:33 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:51.536 12:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.536 12:53:33 -- common/autotest_common.sh@10 -- # set +x 00:33:51.536 Some configs were skipped because the RPC state that can call them passed over. 
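The checks that follow query each GPT partition bdev by its unique GUID and compare the metadata SPDK parsed out of the partition table. Stand-alone, the lookup looks like this (rpc.py talks to the spdk_tgt started above on its default socket, /var/tmp/spdk.sock; the GUID is the one logged below):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)    # look the bdev up by GUID
echo "$bdev" | jq -r '.[0].aliases[0]'                                 # alias must equal that GUID
echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'  # ditto for the GPT metadata
echo "$bdev" | jq -r '.[0].driver_specific.gpt.partition_name'         # e.g. SPDK_TEST_first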
00:33:51.536 12:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:33:51.536 12:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.536 12:53:33 -- common/autotest_common.sh@10 -- # set +x 00:33:51.536 12:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:33:51.536 12:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.536 12:53:33 -- common/autotest_common.sh@10 -- # set +x 00:33:51.536 12:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@619 -- # bdev='[ 00:33:51.536 { 00:33:51.536 "name": "Nvme0n1p1", 00:33:51.536 "aliases": [ 00:33:51.536 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:33:51.536 ], 00:33:51.536 "product_name": "GPT Disk", 00:33:51.536 "block_size": 4096, 00:33:51.536 "num_blocks": 655104, 00:33:51.536 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:33:51.536 "assigned_rate_limits": { 00:33:51.536 "rw_ios_per_sec": 0, 00:33:51.536 "rw_mbytes_per_sec": 0, 00:33:51.536 "r_mbytes_per_sec": 0, 00:33:51.536 "w_mbytes_per_sec": 0 00:33:51.536 }, 00:33:51.536 "claimed": false, 00:33:51.536 "zoned": false, 00:33:51.536 "supported_io_types": { 00:33:51.536 "read": true, 00:33:51.536 "write": true, 00:33:51.536 "unmap": true, 00:33:51.536 "write_zeroes": true, 00:33:51.536 "flush": true, 00:33:51.536 "reset": true, 00:33:51.536 "compare": true, 00:33:51.536 "compare_and_write": false, 00:33:51.536 "abort": true, 00:33:51.536 "nvme_admin": false, 00:33:51.536 "nvme_io": false 00:33:51.536 }, 00:33:51.536 "driver_specific": { 00:33:51.536 "gpt": { 00:33:51.536 "base_bdev": "Nvme0n1", 00:33:51.536 "offset_blocks": 256, 00:33:51.536 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:33:51.536 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:33:51.536 "partition_name": "SPDK_TEST_first" 00:33:51.536 } 00:33:51.536 } 00:33:51.536 } 00:33:51.536 ]' 00:33:51.536 12:53:33 -- bdev/blockdev.sh@620 -- # jq -r length 00:33:51.536 12:53:33 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:33:51.536 12:53:33 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:33:51.536 12:53:33 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:33:51.536 12:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:51.536 12:53:33 -- common/autotest_common.sh@10 -- # set +x 00:33:51.536 12:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@624 -- # bdev='[ 00:33:51.536 { 00:33:51.536 "name": "Nvme0n1p2", 00:33:51.536 "aliases": [ 00:33:51.536 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:33:51.536 ], 00:33:51.536 "product_name": "GPT Disk", 00:33:51.536 "block_size": 4096, 00:33:51.536 "num_blocks": 655103, 00:33:51.536 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:33:51.536 "assigned_rate_limits": { 00:33:51.536 "rw_ios_per_sec": 0, 00:33:51.536 
"rw_mbytes_per_sec": 0, 00:33:51.536 "r_mbytes_per_sec": 0, 00:33:51.536 "w_mbytes_per_sec": 0 00:33:51.536 }, 00:33:51.536 "claimed": false, 00:33:51.536 "zoned": false, 00:33:51.536 "supported_io_types": { 00:33:51.536 "read": true, 00:33:51.536 "write": true, 00:33:51.536 "unmap": true, 00:33:51.536 "write_zeroes": true, 00:33:51.536 "flush": true, 00:33:51.536 "reset": true, 00:33:51.536 "compare": true, 00:33:51.536 "compare_and_write": false, 00:33:51.536 "abort": true, 00:33:51.536 "nvme_admin": false, 00:33:51.536 "nvme_io": false 00:33:51.536 }, 00:33:51.536 "driver_specific": { 00:33:51.536 "gpt": { 00:33:51.536 "base_bdev": "Nvme0n1", 00:33:51.536 "offset_blocks": 655360, 00:33:51.536 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:33:51.536 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:33:51.536 "partition_name": "SPDK_TEST_second" 00:33:51.536 } 00:33:51.536 } 00:33:51.536 } 00:33:51.536 ]' 00:33:51.536 12:53:33 -- bdev/blockdev.sh@625 -- # jq -r length 00:33:51.536 12:53:33 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:33:51.536 12:53:33 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:33:51.536 12:53:34 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:33:51.536 12:53:34 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:33:51.794 12:53:34 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:33:51.794 12:53:34 -- bdev/blockdev.sh@629 -- # killprocess 139363 00:33:51.794 12:53:34 -- common/autotest_common.sh@926 -- # '[' -z 139363 ']' 00:33:51.794 12:53:34 -- common/autotest_common.sh@930 -- # kill -0 139363 00:33:51.794 12:53:34 -- common/autotest_common.sh@931 -- # uname 00:33:51.794 12:53:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:51.794 12:53:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139363 00:33:51.794 12:53:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:51.794 killing process with pid 139363 00:33:51.794 12:53:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:51.794 12:53:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139363' 00:33:51.794 12:53:34 -- common/autotest_common.sh@945 -- # kill 139363 00:33:51.794 12:53:34 -- common/autotest_common.sh@950 -- # wait 139363 00:33:54.329 00:33:54.329 real 0m4.772s 00:33:54.329 user 0m4.970s 00:33:54.329 sys 0m0.658s 00:33:54.329 12:53:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:54.329 12:53:36 -- common/autotest_common.sh@10 -- # set +x 00:33:54.329 ************************************ 00:33:54.329 END TEST bdev_gpt_uuid 00:33:54.329 ************************************ 00:33:54.329 12:53:36 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:33:54.329 12:53:36 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:33:54.329 12:53:36 -- bdev/blockdev.sh@809 -- # cleanup 00:33:54.329 12:53:36 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:33:54.329 12:53:36 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:54.329 12:53:36 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:33:54.329 12:53:36 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:33:54.329 12:53:36 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:33:54.329 12:53:36 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:54.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:54.894 Waiting for block devices as requested 00:33:55.152 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:33:55.153 12:53:37 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:33:55.153 12:53:37 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:33:55.153 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:33:55.153 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:33:55.153 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:33:55.153 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:33:55.153 12:53:37 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:33:55.153 00:33:55.153 real 0m51.060s 00:33:55.153 user 1m11.177s 00:33:55.153 sys 0m8.753s 00:33:55.153 12:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:55.153 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:33:55.153 ************************************ 00:33:55.153 END TEST blockdev_nvme_gpt 00:33:55.153 ************************************ 00:33:55.153 12:53:37 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:33:55.153 12:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:55.153 12:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:55.153 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:33:55.153 ************************************ 00:33:55.153 START TEST nvme 00:33:55.153 ************************************ 00:33:55.153 12:53:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:33:55.412 * Looking for test storage... 00:33:55.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:55.412 12:53:37 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:55.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:55.979 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:33:56.928 12:53:39 -- nvme/nvme.sh@79 -- # uname 00:33:56.928 12:53:39 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:33:56.928 12:53:39 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:33:56.928 12:53:39 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:33:56.928 12:53:39 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:33:56.928 12:53:39 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:33:56.928 12:53:39 -- common/autotest_common.sh@1045 -- # echo 0 00:33:56.928 12:53:39 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:33:56.928 12:53:39 -- common/autotest_common.sh@1047 -- # stubpid=139800 00:33:56.928 Waiting for stub to ready for secondary processes... 00:33:56.928 12:53:39 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:33:56.928 12:53:39 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:33:56.928 12:53:39 -- common/autotest_common.sh@1051 -- # [[ -e /proc/139800 ]] 00:33:56.928 12:53:39 -- common/autotest_common.sh@1052 -- # sleep 1s 00:33:57.189 [2024-10-01 12:53:39.476582] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
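For reference, the gpt teardown traced above (just before the nvme suite kicked off) boils down to two root-level commands:

/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # rebind the NVMe device from uio_pci_generic back to the kernel nvme driver
wipefs --all /dev/nvme0n1                             # erase both GPT headers and the protective MBR, as logged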
00:33:57.189 [2024-10-01 12:53:39.476713] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.130 12:53:40 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:33:58.130 12:53:40 -- common/autotest_common.sh@1051 -- # [[ -e /proc/139800 ]] 00:33:58.130 12:53:40 -- common/autotest_common.sh@1052 -- # sleep 1s 00:33:58.130 [2024-10-01 12:53:40.493425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:58.388 [2024-10-01 12:53:40.712491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.388 [2024-10-01 12:53:40.712701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.388 [2024-10-01 12:53:40.712698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:58.388 [2024-10-01 12:53:40.731968] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:33:58.388 [2024-10-01 12:53:40.746195] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:33:58.388 [2024-10-01 12:53:40.747216] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:33:58.956 12:53:41 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:33:58.956 done. 00:33:58.956 12:53:41 -- common/autotest_common.sh@1054 -- # echo done. 00:33:58.956 12:53:41 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:33:58.956 12:53:41 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:33:58.956 12:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:58.956 12:53:41 -- common/autotest_common.sh@10 -- # set +x 00:33:58.956 ************************************ 00:33:58.956 START TEST nvme_reset 00:33:58.956 ************************************ 00:33:58.956 12:53:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:33:59.526 Initializing NVMe Controllers 00:33:59.526 Skipping QEMU NVMe SSD at 0000:00:06.0 00:33:59.526 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:33:59.526 00:33:59.526 real 0m0.338s 00:33:59.526 user 0m0.106s 00:33:59.526 sys 0m0.161s 00:33:59.526 12:53:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:59.526 12:53:41 -- common/autotest_common.sh@10 -- # set +x 00:33:59.526 ************************************ 00:33:59.526 END TEST nvme_reset 00:33:59.526 ************************************ 00:33:59.526 12:53:41 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:33:59.526 12:53:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:59.526 12:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:59.526 12:53:41 -- common/autotest_common.sh@10 -- # set +x 00:33:59.526 ************************************ 00:33:59.526 START TEST nvme_identify 00:33:59.526 ************************************ 00:33:59.526 12:53:41 -- common/autotest_common.sh@1104 -- # nvme_identify 00:33:59.526 12:53:41 -- nvme/nvme.sh@12 -- # bdfs=() 00:33:59.526 12:53:41 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:33:59.526 12:53:41 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:33:59.526 12:53:41 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:33:59.526 12:53:41 -- common/autotest_common.sh@1498 -- # bdfs=() 
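get_nvme_bdfs, entered just above, enumerates controller PCI addresses via the gen_nvme.sh | jq call traced next; a standalone equivalent that feeds each address to spdk_nvme_identify, as nvme_identify does:

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # e.g. 0000:00:06.0
for bdf in "${bdfs[@]}"; do
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done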
00:33:59.526 12:53:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:59.526 12:53:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:59.526 12:53:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:59.526 12:53:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:59.526 12:53:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:59.526 12:53:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:33:59.526 12:53:41 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:33:59.786 [2024-10-01 12:53:42.245496] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 139839 terminated unexpected 00:33:59.786 ===================================================== 00:33:59.786 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:33:59.786 ===================================================== 00:33:59.786 Controller Capabilities/Features 00:33:59.786 ================================ 00:33:59.786 Vendor ID: 1b36 00:33:59.786 Subsystem Vendor ID: 1af4 00:33:59.786 Serial Number: 12340 00:33:59.786 Model Number: QEMU NVMe Ctrl 00:33:59.787 Firmware Version: 8.0.0 00:33:59.787 Recommended Arb Burst: 6 00:33:59.787 IEEE OUI Identifier: 00 54 52 00:33:59.787 Multi-path I/O 00:33:59.787 May have multiple subsystem ports: No 00:33:59.787 May have multiple controllers: No 00:33:59.787 Associated with SR-IOV VF: No 00:33:59.787 Max Data Transfer Size: 524288 00:33:59.787 Max Number of Namespaces: 256 00:33:59.787 Max Number of I/O Queues: 64 00:33:59.787 NVMe Specification Version (VS): 1.4 00:33:59.787 NVMe Specification Version (Identify): 1.4 00:33:59.787 Maximum Queue Entries: 2048 00:33:59.787 Contiguous Queues Required: Yes 00:33:59.787 Arbitration Mechanisms Supported 00:33:59.787 Weighted Round Robin: Not Supported 00:33:59.787 Vendor Specific: Not Supported 00:33:59.787 Reset Timeout: 7500 ms 00:33:59.787 Doorbell Stride: 4 bytes 00:33:59.787 NVM Subsystem Reset: Not Supported 00:33:59.787 Command Sets Supported 00:33:59.787 NVM Command Set: Supported 00:33:59.787 Boot Partition: Not Supported 00:33:59.787 Memory Page Size Minimum: 4096 bytes 00:33:59.787 Memory Page Size Maximum: 65536 bytes 00:33:59.787 Persistent Memory Region: Not Supported 00:33:59.787 Optional Asynchronous Events Supported 00:33:59.787 Namespace Attribute Notices: Supported 00:33:59.787 Firmware Activation Notices: Not Supported 00:33:59.787 ANA Change Notices: Not Supported 00:33:59.787 PLE Aggregate Log Change Notices: Not Supported 00:33:59.787 LBA Status Info Alert Notices: Not Supported 00:33:59.787 EGE Aggregate Log Change Notices: Not Supported 00:33:59.787 Normal NVM Subsystem Shutdown event: Not Supported 00:33:59.787 Zone Descriptor Change Notices: Not Supported 00:33:59.787 Discovery Log Change Notices: Not Supported 00:33:59.787 Controller Attributes 00:33:59.787 128-bit Host Identifier: Not Supported 00:33:59.787 Non-Operational Permissive Mode: Not Supported 00:33:59.787 NVM Sets: Not Supported 00:33:59.787 Read Recovery Levels: Not Supported 00:33:59.787 Endurance Groups: Not Supported 00:33:59.787 Predictable Latency Mode: Not Supported 00:33:59.787 Traffic Based Keep ALive: Not Supported 00:33:59.787 Namespace Granularity: Not Supported 00:33:59.787 SQ Associations: Not Supported 00:33:59.787 UUID List: Not Supported 00:33:59.787 Multi-Domain Subsystem: Not Supported 00:33:59.787 
Fixed Capacity Management: Not Supported 00:33:59.787 Variable Capacity Management: Not Supported 00:33:59.787 Delete Endurance Group: Not Supported 00:33:59.787 Delete NVM Set: Not Supported 00:33:59.787 Extended LBA Formats Supported: Supported 00:33:59.787 Flexible Data Placement Supported: Not Supported 00:33:59.787 00:33:59.787 Controller Memory Buffer Support 00:33:59.787 ================================ 00:33:59.787 Supported: No 00:33:59.787 00:33:59.787 Persistent Memory Region Support 00:33:59.787 ================================ 00:33:59.787 Supported: No 00:33:59.787 00:33:59.787 Admin Command Set Attributes 00:33:59.787 ============================ 00:33:59.787 Security Send/Receive: Not Supported 00:33:59.787 Format NVM: Supported 00:33:59.787 Firmware Activate/Download: Not Supported 00:33:59.787 Namespace Management: Supported 00:33:59.787 Device Self-Test: Not Supported 00:33:59.787 Directives: Supported 00:33:59.787 NVMe-MI: Not Supported 00:33:59.787 Virtualization Management: Not Supported 00:33:59.787 Doorbell Buffer Config: Supported 00:33:59.787 Get LBA Status Capability: Not Supported 00:33:59.787 Command & Feature Lockdown Capability: Not Supported 00:33:59.787 Abort Command Limit: 4 00:33:59.787 Async Event Request Limit: 4 00:33:59.787 Number of Firmware Slots: N/A 00:33:59.787 Firmware Slot 1 Read-Only: N/A 00:33:59.787 Firmware Activation Without Reset: N/A 00:33:59.787 Multiple Update Detection Support: N/A 00:33:59.787 Firmware Update Granularity: No Information Provided 00:33:59.787 Per-Namespace SMART Log: Yes 00:33:59.787 Asymmetric Namespace Access Log Page: Not Supported 00:33:59.787 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:33:59.787 Command Effects Log Page: Supported 00:33:59.787 Get Log Page Extended Data: Supported 00:33:59.787 Telemetry Log Pages: Not Supported 00:33:59.787 Persistent Event Log Pages: Not Supported 00:33:59.787 Supported Log Pages Log Page: May Support 00:33:59.787 Commands Supported & Effects Log Page: Not Supported 00:33:59.787 Feature Identifiers & Effects Log Page:May Support 00:33:59.787 NVMe-MI Commands & Effects Log Page: May Support 00:33:59.787 Data Area 4 for Telemetry Log: Not Supported 00:33:59.787 Error Log Page Entries Supported: 1 00:33:59.787 Keep Alive: Not Supported 00:33:59.787 00:33:59.787 NVM Command Set Attributes 00:33:59.787 ========================== 00:33:59.787 Submission Queue Entry Size 00:33:59.787 Max: 64 00:33:59.787 Min: 64 00:33:59.787 Completion Queue Entry Size 00:33:59.787 Max: 16 00:33:59.787 Min: 16 00:33:59.787 Number of Namespaces: 256 00:33:59.787 Compare Command: Supported 00:33:59.787 Write Uncorrectable Command: Not Supported 00:33:59.787 Dataset Management Command: Supported 00:33:59.787 Write Zeroes Command: Supported 00:33:59.787 Set Features Save Field: Supported 00:33:59.787 Reservations: Not Supported 00:33:59.787 Timestamp: Supported 00:33:59.787 Copy: Supported 00:33:59.787 Volatile Write Cache: Present 00:33:59.787 Atomic Write Unit (Normal): 1 00:33:59.787 Atomic Write Unit (PFail): 1 00:33:59.787 Atomic Compare & Write Unit: 1 00:33:59.787 Fused Compare & Write: Not Supported 00:33:59.787 Scatter-Gather List 00:33:59.787 SGL Command Set: Supported 00:33:59.787 SGL Keyed: Not Supported 00:33:59.787 SGL Bit Bucket Descriptor: Not Supported 00:33:59.787 SGL Metadata Pointer: Not Supported 00:33:59.787 Oversized SGL: Not Supported 00:33:59.787 SGL Metadata Address: Not Supported 00:33:59.787 SGL Offset: Not Supported 00:33:59.787 Transport SGL Data Block: Not Supported 
00:33:59.787 Replay Protected Memory Block: Not Supported 00:33:59.787 00:33:59.787 Firmware Slot Information 00:33:59.787 ========================= 00:33:59.787 Active slot: 1 00:33:59.787 Slot 1 Firmware Revision: 1.0 00:33:59.787 00:33:59.787 00:33:59.787 Commands Supported and Effects 00:33:59.787 ============================== 00:33:59.787 Admin Commands 00:33:59.787 -------------- 00:33:59.787 Delete I/O Submission Queue (00h): Supported 00:33:59.787 Create I/O Submission Queue (01h): Supported 00:33:59.787 Get Log Page (02h): Supported 00:33:59.787 Delete I/O Completion Queue (04h): Supported 00:33:59.787 Create I/O Completion Queue (05h): Supported 00:33:59.787 Identify (06h): Supported 00:33:59.787 Abort (08h): Supported 00:33:59.787 Set Features (09h): Supported 00:33:59.787 Get Features (0Ah): Supported 00:33:59.787 Asynchronous Event Request (0Ch): Supported 00:33:59.787 Namespace Attachment (15h): Supported NS-Inventory-Change 00:33:59.787 Directive Send (19h): Supported 00:33:59.787 Directive Receive (1Ah): Supported 00:33:59.787 Virtualization Management (1Ch): Supported 00:33:59.787 Doorbell Buffer Config (7Ch): Supported 00:33:59.787 Format NVM (80h): Supported LBA-Change 00:33:59.787 I/O Commands 00:33:59.787 ------------ 00:33:59.787 Flush (00h): Supported LBA-Change 00:33:59.787 Write (01h): Supported LBA-Change 00:33:59.787 Read (02h): Supported 00:33:59.787 Compare (05h): Supported 00:33:59.787 Write Zeroes (08h): Supported LBA-Change 00:33:59.787 Dataset Management (09h): Supported LBA-Change 00:33:59.787 Unknown (0Ch): Supported 00:33:59.787 Unknown (12h): Supported 00:33:59.787 Copy (19h): Supported LBA-Change 00:33:59.787 Unknown (1Dh): Supported LBA-Change 00:33:59.787 00:33:59.787 Error Log 00:33:59.787 ========= 00:33:59.787 00:33:59.787 Arbitration 00:33:59.787 =========== 00:33:59.787 Arbitration Burst: no limit 00:33:59.787 00:33:59.787 Power Management 00:33:59.787 ================ 00:33:59.787 Number of Power States: 1 00:33:59.787 Current Power State: Power State #0 00:33:59.787 Power State #0: 00:33:59.787 Max Power: 25.00 W 00:33:59.787 Non-Operational State: Operational 00:33:59.787 Entry Latency: 16 microseconds 00:33:59.787 Exit Latency: 4 microseconds 00:33:59.787 Relative Read Throughput: 0 00:33:59.787 Relative Read Latency: 0 00:33:59.787 Relative Write Throughput: 0 00:33:59.787 Relative Write Latency: 0 00:33:59.787 Idle Power: Not Reported 00:33:59.787 Active Power: Not Reported 00:33:59.787 Non-Operational Permissive Mode: Not Supported 00:33:59.787 00:33:59.787 Health Information 00:33:59.787 ================== 00:33:59.787 Critical Warnings: 00:33:59.787 Available Spare Space: OK 00:33:59.787 Temperature: OK 00:33:59.787 Device Reliability: OK 00:33:59.787 Read Only: No 00:33:59.787 Volatile Memory Backup: OK 00:33:59.788 Current Temperature: 323 Kelvin (50 Celsius) 00:33:59.788 Temperature Threshold: 343 Kelvin (70 Celsius) 00:33:59.788 Available Spare: 0% 00:33:59.788 Available Spare Threshold: 0% 00:33:59.788 Life Percentage Used: 0% 00:33:59.788 Data Units Read: 8117 00:33:59.788 Data Units Written: 3969 00:33:59.788 Host Read Commands: 316574 00:33:59.788 Host Write Commands: 173569 00:33:59.788 Controller Busy Time: 0 minutes 00:33:59.788 Power Cycles: 0 00:33:59.788 Power On Hours: 0 hours 00:33:59.788 Unsafe Shutdowns: 0 00:33:59.788 Unrecoverable Media Errors: 0 00:33:59.788 Lifetime Error Log Entries: 0 00:33:59.788 Warning Temperature Time: 0 minutes 00:33:59.788 Critical Temperature Time: 0 minutes 00:33:59.788 00:33:59.788 
Number of Queues 00:33:59.788 ================ 00:33:59.788 Number of I/O Submission Queues: 64 00:33:59.788 Number of I/O Completion Queues: 64 00:33:59.788 00:33:59.788 ZNS Specific Controller Data 00:33:59.788 ============================ 00:33:59.788 Zone Append Size Limit: 0 00:33:59.788 00:33:59.788 00:33:59.788 Active Namespaces 00:33:59.788 ================= 00:33:59.788 Namespace ID:1 00:33:59.788 Error Recovery Timeout: Unlimited 00:33:59.788 Command Set Identifier: NVM (00h) 00:33:59.788 Deallocate: Supported 00:33:59.788 Deallocated/Unwritten Error: Supported 00:33:59.788 Deallocated Read Value: All 0x00 00:33:59.788 Deallocate in Write Zeroes: Not Supported 00:33:59.788 Deallocated Guard Field: 0xFFFF 00:33:59.788 Flush: Supported 00:33:59.788 Reservation: Not Supported 00:33:59.788 Namespace Sharing Capabilities: Private 00:33:59.788 Size (in LBAs): 1310720 (5GiB) 00:33:59.788 Capacity (in LBAs): 1310720 (5GiB) 00:33:59.788 Utilization (in LBAs): 1310720 (5GiB) 00:33:59.788 Thin Provisioning: Not Supported 00:33:59.788 Per-NS Atomic Units: No 00:33:59.788 Maximum Single Source Range Length: 128 00:33:59.788 Maximum Copy Length: 128 00:33:59.788 Maximum Source Range Count: 128 00:33:59.788 NGUID/EUI64 Never Reused: No 00:33:59.788 Namespace Write Protected: No 00:33:59.788 Number of LBA Formats: 8 00:33:59.788 Current LBA Format: LBA Format #04 00:33:59.788 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:59.788 LBA Format #01: Data Size: 512 Metadata Size: 8 00:33:59.788 LBA Format #02: Data Size: 512 Metadata Size: 16 00:33:59.788 LBA Format #03: Data Size: 512 Metadata Size: 64 00:33:59.788 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:33:59.788 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:33:59.788 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:33:59.788 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:33:59.788 00:33:59.788 12:53:42 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:33:59.788 12:53:42 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:34:00.356 ===================================================== 00:34:00.356 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:00.356 ===================================================== 00:34:00.356 Controller Capabilities/Features 00:34:00.356 ================================ 00:34:00.356 Vendor ID: 1b36 00:34:00.356 Subsystem Vendor ID: 1af4 00:34:00.356 Serial Number: 12340 00:34:00.356 Model Number: QEMU NVMe Ctrl 00:34:00.356 Firmware Version: 8.0.0 00:34:00.356 Recommended Arb Burst: 6 00:34:00.356 IEEE OUI Identifier: 00 54 52 00:34:00.356 Multi-path I/O 00:34:00.356 May have multiple subsystem ports: No 00:34:00.356 May have multiple controllers: No 00:34:00.356 Associated with SR-IOV VF: No 00:34:00.356 Max Data Transfer Size: 524288 00:34:00.356 Max Number of Namespaces: 256 00:34:00.356 Max Number of I/O Queues: 64 00:34:00.356 NVMe Specification Version (VS): 1.4 00:34:00.356 NVMe Specification Version (Identify): 1.4 00:34:00.356 Maximum Queue Entries: 2048 00:34:00.356 Contiguous Queues Required: Yes 00:34:00.356 Arbitration Mechanisms Supported 00:34:00.356 Weighted Round Robin: Not Supported 00:34:00.356 Vendor Specific: Not Supported 00:34:00.356 Reset Timeout: 7500 ms 00:34:00.356 Doorbell Stride: 4 bytes 00:34:00.356 NVM Subsystem Reset: Not Supported 00:34:00.356 Command Sets Supported 00:34:00.356 NVM Command Set: Supported 00:34:00.356 Boot Partition: Not Supported 00:34:00.356 Memory Page Size 
Minimum: 4096 bytes 00:34:00.356 Memory Page Size Maximum: 65536 bytes 00:34:00.356 Persistent Memory Region: Not Supported 00:34:00.356 Optional Asynchronous Events Supported 00:34:00.356 Namespace Attribute Notices: Supported 00:34:00.356 Firmware Activation Notices: Not Supported 00:34:00.356 ANA Change Notices: Not Supported 00:34:00.356 PLE Aggregate Log Change Notices: Not Supported 00:34:00.356 LBA Status Info Alert Notices: Not Supported 00:34:00.356 EGE Aggregate Log Change Notices: Not Supported 00:34:00.356 Normal NVM Subsystem Shutdown event: Not Supported 00:34:00.356 Zone Descriptor Change Notices: Not Supported 00:34:00.356 Discovery Log Change Notices: Not Supported 00:34:00.356 Controller Attributes 00:34:00.356 128-bit Host Identifier: Not Supported 00:34:00.356 Non-Operational Permissive Mode: Not Supported 00:34:00.356 NVM Sets: Not Supported 00:34:00.356 Read Recovery Levels: Not Supported 00:34:00.356 Endurance Groups: Not Supported 00:34:00.356 Predictable Latency Mode: Not Supported 00:34:00.356 Traffic Based Keep ALive: Not Supported 00:34:00.356 Namespace Granularity: Not Supported 00:34:00.356 SQ Associations: Not Supported 00:34:00.356 UUID List: Not Supported 00:34:00.356 Multi-Domain Subsystem: Not Supported 00:34:00.356 Fixed Capacity Management: Not Supported 00:34:00.356 Variable Capacity Management: Not Supported 00:34:00.356 Delete Endurance Group: Not Supported 00:34:00.356 Delete NVM Set: Not Supported 00:34:00.356 Extended LBA Formats Supported: Supported 00:34:00.356 Flexible Data Placement Supported: Not Supported 00:34:00.356 00:34:00.356 Controller Memory Buffer Support 00:34:00.356 ================================ 00:34:00.356 Supported: No 00:34:00.356 00:34:00.356 Persistent Memory Region Support 00:34:00.356 ================================ 00:34:00.356 Supported: No 00:34:00.356 00:34:00.356 Admin Command Set Attributes 00:34:00.356 ============================ 00:34:00.356 Security Send/Receive: Not Supported 00:34:00.356 Format NVM: Supported 00:34:00.356 Firmware Activate/Download: Not Supported 00:34:00.356 Namespace Management: Supported 00:34:00.356 Device Self-Test: Not Supported 00:34:00.356 Directives: Supported 00:34:00.356 NVMe-MI: Not Supported 00:34:00.356 Virtualization Management: Not Supported 00:34:00.356 Doorbell Buffer Config: Supported 00:34:00.356 Get LBA Status Capability: Not Supported 00:34:00.356 Command & Feature Lockdown Capability: Not Supported 00:34:00.356 Abort Command Limit: 4 00:34:00.356 Async Event Request Limit: 4 00:34:00.356 Number of Firmware Slots: N/A 00:34:00.356 Firmware Slot 1 Read-Only: N/A 00:34:00.356 Firmware Activation Without Reset: N/A 00:34:00.356 Multiple Update Detection Support: N/A 00:34:00.357 Firmware Update Granularity: No Information Provided 00:34:00.357 Per-Namespace SMART Log: Yes 00:34:00.357 Asymmetric Namespace Access Log Page: Not Supported 00:34:00.357 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:34:00.357 Command Effects Log Page: Supported 00:34:00.357 Get Log Page Extended Data: Supported 00:34:00.357 Telemetry Log Pages: Not Supported 00:34:00.357 Persistent Event Log Pages: Not Supported 00:34:00.357 Supported Log Pages Log Page: May Support 00:34:00.357 Commands Supported & Effects Log Page: Not Supported 00:34:00.357 Feature Identifiers & Effects Log Page:May Support 00:34:00.357 NVMe-MI Commands & Effects Log Page: May Support 00:34:00.357 Data Area 4 for Telemetry Log: Not Supported 00:34:00.357 Error Log Page Entries Supported: 1 00:34:00.357 Keep Alive: Not 
Supported 00:34:00.357 00:34:00.357 NVM Command Set Attributes 00:34:00.357 ========================== 00:34:00.357 Submission Queue Entry Size 00:34:00.357 Max: 64 00:34:00.357 Min: 64 00:34:00.357 Completion Queue Entry Size 00:34:00.357 Max: 16 00:34:00.357 Min: 16 00:34:00.357 Number of Namespaces: 256 00:34:00.357 Compare Command: Supported 00:34:00.357 Write Uncorrectable Command: Not Supported 00:34:00.357 Dataset Management Command: Supported 00:34:00.357 Write Zeroes Command: Supported 00:34:00.357 Set Features Save Field: Supported 00:34:00.357 Reservations: Not Supported 00:34:00.357 Timestamp: Supported 00:34:00.357 Copy: Supported 00:34:00.357 Volatile Write Cache: Present 00:34:00.357 Atomic Write Unit (Normal): 1 00:34:00.357 Atomic Write Unit (PFail): 1 00:34:00.357 Atomic Compare & Write Unit: 1 00:34:00.357 Fused Compare & Write: Not Supported 00:34:00.357 Scatter-Gather List 00:34:00.357 SGL Command Set: Supported 00:34:00.357 SGL Keyed: Not Supported 00:34:00.357 SGL Bit Bucket Descriptor: Not Supported 00:34:00.357 SGL Metadata Pointer: Not Supported 00:34:00.357 Oversized SGL: Not Supported 00:34:00.357 SGL Metadata Address: Not Supported 00:34:00.357 SGL Offset: Not Supported 00:34:00.357 Transport SGL Data Block: Not Supported 00:34:00.357 Replay Protected Memory Block: Not Supported 00:34:00.357 00:34:00.357 Firmware Slot Information 00:34:00.357 ========================= 00:34:00.357 Active slot: 1 00:34:00.357 Slot 1 Firmware Revision: 1.0 00:34:00.357 00:34:00.357 00:34:00.357 Commands Supported and Effects 00:34:00.357 ============================== 00:34:00.357 Admin Commands 00:34:00.357 -------------- 00:34:00.357 Delete I/O Submission Queue (00h): Supported 00:34:00.357 Create I/O Submission Queue (01h): Supported 00:34:00.357 Get Log Page (02h): Supported 00:34:00.357 Delete I/O Completion Queue (04h): Supported 00:34:00.357 Create I/O Completion Queue (05h): Supported 00:34:00.357 Identify (06h): Supported 00:34:00.357 Abort (08h): Supported 00:34:00.357 Set Features (09h): Supported 00:34:00.357 Get Features (0Ah): Supported 00:34:00.357 Asynchronous Event Request (0Ch): Supported 00:34:00.357 Namespace Attachment (15h): Supported NS-Inventory-Change 00:34:00.357 Directive Send (19h): Supported 00:34:00.357 Directive Receive (1Ah): Supported 00:34:00.357 Virtualization Management (1Ch): Supported 00:34:00.357 Doorbell Buffer Config (7Ch): Supported 00:34:00.357 Format NVM (80h): Supported LBA-Change 00:34:00.357 I/O Commands 00:34:00.357 ------------ 00:34:00.357 Flush (00h): Supported LBA-Change 00:34:00.357 Write (01h): Supported LBA-Change 00:34:00.357 Read (02h): Supported 00:34:00.357 Compare (05h): Supported 00:34:00.357 Write Zeroes (08h): Supported LBA-Change 00:34:00.357 Dataset Management (09h): Supported LBA-Change 00:34:00.357 Unknown (0Ch): Supported 00:34:00.357 Unknown (12h): Supported 00:34:00.357 Copy (19h): Supported LBA-Change 00:34:00.357 Unknown (1Dh): Supported LBA-Change 00:34:00.357 00:34:00.357 Error Log 00:34:00.357 ========= 00:34:00.357 00:34:00.357 Arbitration 00:34:00.357 =========== 00:34:00.357 Arbitration Burst: no limit 00:34:00.357 00:34:00.357 Power Management 00:34:00.357 ================ 00:34:00.357 Number of Power States: 1 00:34:00.357 Current Power State: Power State #0 00:34:00.357 Power State #0: 00:34:00.357 Max Power: 25.00 W 00:34:00.357 Non-Operational State: Operational 00:34:00.357 Entry Latency: 16 microseconds 00:34:00.357 Exit Latency: 4 microseconds 00:34:00.357 Relative Read Throughput: 0 
00:34:00.357 Relative Read Latency: 0 00:34:00.357 Relative Write Throughput: 0 00:34:00.357 Relative Write Latency: 0 00:34:00.357 Idle Power: Not Reported 00:34:00.357 Active Power: Not Reported 00:34:00.357 Non-Operational Permissive Mode: Not Supported 00:34:00.357 00:34:00.357 Health Information 00:34:00.357 ================== 00:34:00.357 Critical Warnings: 00:34:00.357 Available Spare Space: OK 00:34:00.357 Temperature: OK 00:34:00.357 Device Reliability: OK 00:34:00.357 Read Only: No 00:34:00.357 Volatile Memory Backup: OK 00:34:00.357 Current Temperature: 323 Kelvin (50 Celsius) 00:34:00.357 Temperature Threshold: 343 Kelvin (70 Celsius) 00:34:00.357 Available Spare: 0% 00:34:00.357 Available Spare Threshold: 0% 00:34:00.357 Life Percentage Used: 0% 00:34:00.357 Data Units Read: 8117 00:34:00.357 Data Units Written: 3969 00:34:00.357 Host Read Commands: 316574 00:34:00.357 Host Write Commands: 173569 00:34:00.357 Controller Busy Time: 0 minutes 00:34:00.357 Power Cycles: 0 00:34:00.357 Power On Hours: 0 hours 00:34:00.357 Unsafe Shutdowns: 0 00:34:00.357 Unrecoverable Media Errors: 0 00:34:00.357 Lifetime Error Log Entries: 0 00:34:00.357 Warning Temperature Time: 0 minutes 00:34:00.357 Critical Temperature Time: 0 minutes 00:34:00.357 00:34:00.357 Number of Queues 00:34:00.357 ================ 00:34:00.357 Number of I/O Submission Queues: 64 00:34:00.357 Number of I/O Completion Queues: 64 00:34:00.357 00:34:00.357 ZNS Specific Controller Data 00:34:00.357 ============================ 00:34:00.357 Zone Append Size Limit: 0 00:34:00.357 00:34:00.357 00:34:00.357 Active Namespaces 00:34:00.357 ================= 00:34:00.357 Namespace ID:1 00:34:00.357 Error Recovery Timeout: Unlimited 00:34:00.357 Command Set Identifier: NVM (00h) 00:34:00.357 Deallocate: Supported 00:34:00.357 Deallocated/Unwritten Error: Supported 00:34:00.357 Deallocated Read Value: All 0x00 00:34:00.357 Deallocate in Write Zeroes: Not Supported 00:34:00.357 Deallocated Guard Field: 0xFFFF 00:34:00.357 Flush: Supported 00:34:00.357 Reservation: Not Supported 00:34:00.357 Namespace Sharing Capabilities: Private 00:34:00.357 Size (in LBAs): 1310720 (5GiB) 00:34:00.357 Capacity (in LBAs): 1310720 (5GiB) 00:34:00.357 Utilization (in LBAs): 1310720 (5GiB) 00:34:00.357 Thin Provisioning: Not Supported 00:34:00.357 Per-NS Atomic Units: No 00:34:00.357 Maximum Single Source Range Length: 128 00:34:00.357 Maximum Copy Length: 128 00:34:00.357 Maximum Source Range Count: 128 00:34:00.357 NGUID/EUI64 Never Reused: No 00:34:00.357 Namespace Write Protected: No 00:34:00.357 Number of LBA Formats: 8 00:34:00.357 Current LBA Format: LBA Format #04 00:34:00.357 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:00.357 LBA Format #01: Data Size: 512 Metadata Size: 8 00:34:00.357 LBA Format #02: Data Size: 512 Metadata Size: 16 00:34:00.357 LBA Format #03: Data Size: 512 Metadata Size: 64 00:34:00.357 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:34:00.357 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:34:00.357 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:34:00.357 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:34:00.357 00:34:00.357 00:34:00.357 real 0m0.806s 00:34:00.357 user 0m0.296s 00:34:00.357 sys 0m0.414s 00:34:00.357 12:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.357 12:53:42 -- common/autotest_common.sh@10 -- # set +x 00:34:00.357 ************************************ 00:34:00.357 END TEST nvme_identify 00:34:00.357 ************************************ 00:34:00.357 
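[editor's note] Before the perf stage below starts, the enumeration that fed the identify runs above is worth spelling out: autotest_common.sh@1499 collects PCI addresses with gen_nvme.sh piped through jq, and nvme.sh@15-16 loops over them. A minimal sketch of that flow, assuming it is run standalone rather than inside the harness (paths are verbatim from this log):

    #!/usr/bin/env bash
    # Sketch of the controller enumeration traced at autotest_common.sh@1499 above.
    rootdir=/home/vagrant/spdk_repo/spdk

    # gen_nvme.sh emits a JSON config; jq extracts each controller's PCI address.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }

    for bdf in "${bdfs[@]}"; do
        # Same form as the nvme.sh@16 invocation above: identify one controller.
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done

On this VM the loop finds a single QEMU-emulated controller at 0000:00:06.0, which is why both identify dumps above describe the same device.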
12:53:42 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:34:00.357 12:53:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:00.357 12:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:00.357 12:53:42 -- common/autotest_common.sh@10 -- # set +x 00:34:00.357 ************************************ 00:34:00.357 START TEST nvme_perf 00:34:00.357 ************************************ 00:34:00.357 12:53:42 -- common/autotest_common.sh@1104 -- # nvme_perf 00:34:00.357 12:53:42 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:34:01.735 Initializing NVMe Controllers 00:34:01.735 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:01.735 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:34:01.735 Initialization complete. Launching workers. 00:34:01.735 ======================================================== 00:34:01.735 Latency(us) 00:34:01.735 Device Information : IOPS MiB/s Average min max 00:34:01.735 PCIE (0000:00:06.0) NSID 1 from core 0: 53504.00 627.00 2393.04 1120.00 8278.03 00:34:01.735 ======================================================== 00:34:01.735 Total : 53504.00 627.00 2393.04 1120.00 8278.03 00:34:01.735 00:34:01.735 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:34:01.735 ================================================================================= 00:34:01.735 1.00000% : 1394.943us 00:34:01.735 10.00000% : 1671.300us 00:34:01.735 25.00000% : 1947.656us 00:34:01.735 50.00000% : 2368.771us 00:34:01.735 75.00000% : 2776.726us 00:34:01.735 90.00000% : 3079.402us 00:34:01.735 95.00000% : 3368.919us 00:34:01.735 98.00000% : 3816.353us 00:34:01.735 99.00000% : 4105.870us 00:34:01.735 99.50000% : 4342.747us 00:34:01.735 99.90000% : 6790.477us 00:34:01.735 99.99000% : 8159.100us 00:34:01.735 99.99900% : 8317.018us 00:34:01.735 99.99990% : 8317.018us 00:34:01.735 99.99999% : 8317.018us 00:34:01.735 00:34:01.735 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:34:01.735 ============================================================================== 00:34:01.735 Range in us Cumulative IO count 00:34:01.735 1118.586 - 1125.166: 0.0019% ( 1) 00:34:01.735 1125.166 - 1131.746: 0.0037% ( 1) 00:34:01.735 1131.746 - 1138.326: 0.0056% ( 1) 00:34:01.735 1144.906 - 1151.486: 0.0075% ( 1) 00:34:01.735 1151.486 - 1158.066: 0.0093% ( 1) 00:34:01.735 1158.066 - 1164.646: 0.0131% ( 2) 00:34:01.735 1171.226 - 1177.806: 0.0187% ( 3) 00:34:01.735 1184.386 - 1190.965: 0.0243% ( 3) 00:34:01.735 1197.545 - 1204.125: 0.0299% ( 3) 00:34:01.735 1210.705 - 1217.285: 0.0374% ( 4) 00:34:01.735 1217.285 - 1223.865: 0.0430% ( 3) 00:34:01.735 1223.865 - 1230.445: 0.0542% ( 6) 00:34:01.735 1230.445 - 1237.025: 0.0710% ( 9) 00:34:01.735 1237.025 - 1243.605: 0.0785% ( 4) 00:34:01.735 1243.605 - 1250.185: 0.0935% ( 8) 00:34:01.735 1250.185 - 1256.765: 0.1047% ( 6) 00:34:01.735 1256.765 - 1263.345: 0.1159% ( 6) 00:34:01.735 1263.345 - 1269.924: 0.1327% ( 9) 00:34:01.735 1269.924 - 1276.504: 0.1570% ( 13) 00:34:01.735 1276.504 - 1283.084: 0.1701% ( 7) 00:34:01.735 1283.084 - 1289.664: 0.2019% ( 17) 00:34:01.735 1289.664 - 1296.244: 0.2262% ( 13) 00:34:01.735 1296.244 - 1302.824: 0.2617% ( 19) 00:34:01.735 1302.824 - 1309.404: 0.2897% ( 15) 00:34:01.735 1309.404 - 1315.984: 0.3215% ( 17) 00:34:01.735 1315.984 - 1322.564: 0.3682% ( 25) 00:34:01.735 1322.564 - 1329.144: 0.4018% ( 18) 00:34:01.735 1329.144 - 1335.724: 0.4430% ( 22) 00:34:01.735 1335.724 - 1342.304: 
0.4822% ( 21) 00:34:01.735 1342.304 - 1348.884: 0.5495% ( 36) 00:34:01.735 1348.884 - 1355.463: 0.6000% ( 27) 00:34:01.735 1355.463 - 1362.043: 0.6635% ( 34) 00:34:01.735 1362.043 - 1368.623: 0.7289% ( 35) 00:34:01.735 1368.623 - 1375.203: 0.8018% ( 39) 00:34:01.735 1375.203 - 1381.783: 0.8710% ( 37) 00:34:01.735 1381.783 - 1388.363: 0.9476% ( 41) 00:34:01.735 1388.363 - 1394.943: 1.0392% ( 49) 00:34:01.735 1394.943 - 1401.523: 1.1195% ( 43) 00:34:01.735 1401.523 - 1408.103: 1.2279% ( 58) 00:34:01.735 1408.103 - 1414.683: 1.3158% ( 47) 00:34:01.735 1414.683 - 1421.263: 1.4186% ( 55) 00:34:01.735 1421.263 - 1427.843: 1.5419% ( 66) 00:34:01.735 1427.843 - 1434.422: 1.6690% ( 68) 00:34:01.735 1434.422 - 1441.002: 1.7849% ( 62) 00:34:01.735 1441.002 - 1447.582: 1.9083% ( 66) 00:34:01.735 1447.582 - 1454.162: 2.0541% ( 78) 00:34:01.735 1454.162 - 1460.742: 2.2279% ( 93) 00:34:01.735 1460.742 - 1467.322: 2.3774% ( 80) 00:34:01.735 1467.322 - 1473.902: 2.5512% ( 93) 00:34:01.735 1473.902 - 1480.482: 2.7026% ( 81) 00:34:01.735 1480.482 - 1487.062: 2.9063% ( 109) 00:34:01.735 1487.062 - 1493.642: 3.0914% ( 99) 00:34:01.735 1493.642 - 1500.222: 3.2839% ( 103) 00:34:01.735 1500.222 - 1506.802: 3.4726% ( 101) 00:34:01.735 1506.802 - 1513.382: 3.7137% ( 129) 00:34:01.735 1513.382 - 1519.961: 3.9044% ( 102) 00:34:01.735 1519.961 - 1526.541: 4.1380% ( 125) 00:34:01.735 1526.541 - 1533.121: 4.3548% ( 116) 00:34:01.735 1533.121 - 1539.701: 4.6034% ( 133) 00:34:01.735 1539.701 - 1546.281: 4.8445% ( 129) 00:34:01.735 1546.281 - 1552.861: 5.0987% ( 136) 00:34:01.735 1552.861 - 1559.441: 5.3342% ( 126) 00:34:01.735 1559.441 - 1566.021: 5.5865% ( 135) 00:34:01.735 1566.021 - 1572.601: 5.8556% ( 144) 00:34:01.735 1572.601 - 1579.181: 6.1117% ( 137) 00:34:01.735 1579.181 - 1585.761: 6.3864% ( 147) 00:34:01.735 1585.761 - 1592.341: 6.6518% ( 142) 00:34:01.735 1592.341 - 1598.920: 6.9378% ( 153) 00:34:01.735 1598.920 - 1605.500: 7.2219% ( 152) 00:34:01.735 1605.500 - 1612.080: 7.5078% ( 153) 00:34:01.735 1612.080 - 1618.660: 7.7994% ( 156) 00:34:01.735 1618.660 - 1625.240: 8.0872% ( 154) 00:34:01.735 1625.240 - 1631.820: 8.3882% ( 161) 00:34:01.735 1631.820 - 1638.400: 8.6909% ( 162) 00:34:01.735 1638.400 - 1644.980: 8.9844% ( 157) 00:34:01.735 1644.980 - 1651.560: 9.3171% ( 178) 00:34:01.735 1651.560 - 1658.140: 9.6180% ( 161) 00:34:01.735 1658.140 - 1664.720: 9.9600% ( 183) 00:34:01.735 1664.720 - 1671.300: 10.2590% ( 160) 00:34:01.735 1671.300 - 1677.880: 10.5899% ( 177) 00:34:01.735 1677.880 - 1684.459: 10.9076% ( 170) 00:34:01.735 1684.459 - 1697.619: 11.5561% ( 347) 00:34:01.735 1697.619 - 1710.779: 12.2514% ( 372) 00:34:01.735 1710.779 - 1723.939: 12.9317% ( 364) 00:34:01.735 1723.939 - 1737.099: 13.6233% ( 370) 00:34:01.735 1737.099 - 1750.259: 14.2999% ( 362) 00:34:01.735 1750.259 - 1763.418: 15.0026% ( 376) 00:34:01.735 1763.418 - 1776.578: 15.7203% ( 384) 00:34:01.735 1776.578 - 1789.738: 16.4436% ( 387) 00:34:01.735 1789.738 - 1802.898: 17.1725% ( 390) 00:34:01.735 1802.898 - 1816.058: 17.9033% ( 391) 00:34:01.735 1816.058 - 1829.218: 18.6453% ( 397) 00:34:01.735 1829.218 - 1842.378: 19.3911% ( 399) 00:34:01.735 1842.378 - 1855.537: 20.1275% ( 394) 00:34:01.735 1855.537 - 1868.697: 20.8639% ( 394) 00:34:01.735 1868.697 - 1881.857: 21.6189% ( 404) 00:34:01.735 1881.857 - 1895.017: 22.3666% ( 400) 00:34:01.736 1895.017 - 1908.177: 23.1235% ( 405) 00:34:01.736 1908.177 - 1921.337: 23.8786% ( 404) 00:34:01.736 1921.337 - 1934.496: 24.6393% ( 407) 00:34:01.736 1934.496 - 1947.656: 25.4000% ( 407) 
00:34:01.736 1947.656 - 1960.816: 26.1401% ( 396) 00:34:01.736 1960.816 - 1973.976: 26.9064% ( 410) 00:34:01.736 1973.976 - 1987.136: 27.6447% ( 395) 00:34:01.736 1987.136 - 2000.296: 28.4203% ( 415) 00:34:01.736 2000.296 - 2013.455: 29.1959% ( 415) 00:34:01.736 2013.455 - 2026.615: 29.9697% ( 414) 00:34:01.736 2026.615 - 2039.775: 30.7323% ( 408) 00:34:01.736 2039.775 - 2052.935: 31.4892% ( 405) 00:34:01.736 2052.935 - 2066.095: 32.2761% ( 421) 00:34:01.736 2066.095 - 2079.255: 33.0461% ( 412) 00:34:01.736 2079.255 - 2092.414: 33.8162% ( 412) 00:34:01.736 2092.414 - 2105.574: 34.5993% ( 419) 00:34:01.736 2105.574 - 2118.734: 35.3731% ( 414) 00:34:01.736 2118.734 - 2131.894: 36.1637% ( 423) 00:34:01.736 2131.894 - 2145.054: 36.9393% ( 415) 00:34:01.736 2145.054 - 2158.214: 37.7280% ( 422) 00:34:01.736 2158.214 - 2171.373: 38.5018% ( 414) 00:34:01.736 2171.373 - 2184.533: 39.2924% ( 423) 00:34:01.736 2184.533 - 2197.693: 40.0942% ( 429) 00:34:01.736 2197.693 - 2210.853: 40.8829% ( 422) 00:34:01.736 2210.853 - 2224.013: 41.6474% ( 409) 00:34:01.736 2224.013 - 2237.173: 42.4473% ( 428) 00:34:01.736 2237.173 - 2250.333: 43.2416% ( 425) 00:34:01.736 2250.333 - 2263.492: 44.0341% ( 424) 00:34:01.736 2263.492 - 2276.652: 44.8322% ( 427) 00:34:01.736 2276.652 - 2289.812: 45.6396% ( 432) 00:34:01.736 2289.812 - 2302.972: 46.4358% ( 426) 00:34:01.736 2302.972 - 2316.132: 47.2170% ( 418) 00:34:01.736 2316.132 - 2329.292: 47.9964% ( 417) 00:34:01.736 2329.292 - 2342.451: 48.7964% ( 428) 00:34:01.736 2342.451 - 2355.611: 49.6131% ( 437) 00:34:01.736 2355.611 - 2368.771: 50.4112% ( 427) 00:34:01.736 2368.771 - 2381.931: 51.2149% ( 430) 00:34:01.736 2381.931 - 2395.091: 52.0092% ( 425) 00:34:01.736 2395.091 - 2408.251: 52.8260% ( 437) 00:34:01.736 2408.251 - 2421.410: 53.6296% ( 430) 00:34:01.736 2421.410 - 2434.570: 54.4090% ( 417) 00:34:01.736 2434.570 - 2447.730: 55.2258% ( 437) 00:34:01.736 2447.730 - 2460.890: 56.0519% ( 442) 00:34:01.736 2460.890 - 2474.050: 56.8574% ( 431) 00:34:01.736 2474.050 - 2487.210: 57.6817% ( 441) 00:34:01.736 2487.210 - 2500.369: 58.4872% ( 431) 00:34:01.736 2500.369 - 2513.529: 59.2834% ( 426) 00:34:01.736 2513.529 - 2526.689: 60.0927% ( 433) 00:34:01.736 2526.689 - 2539.849: 60.8983% ( 431) 00:34:01.736 2539.849 - 2553.009: 61.7001% ( 429) 00:34:01.736 2553.009 - 2566.169: 62.5206% ( 439) 00:34:01.736 2566.169 - 2579.329: 63.3504% ( 444) 00:34:01.736 2579.329 - 2592.488: 64.1653% ( 436) 00:34:01.736 2592.488 - 2605.648: 64.9708% ( 431) 00:34:01.736 2605.648 - 2618.808: 65.7913% ( 439) 00:34:01.736 2618.808 - 2631.968: 66.6137% ( 440) 00:34:01.736 2631.968 - 2645.128: 67.4155% ( 429) 00:34:01.736 2645.128 - 2658.288: 68.2323% ( 437) 00:34:01.736 2658.288 - 2671.447: 69.0509% ( 438) 00:34:01.736 2671.447 - 2684.607: 69.8845% ( 446) 00:34:01.736 2684.607 - 2697.767: 70.7162% ( 445) 00:34:01.736 2697.767 - 2710.927: 71.5535% ( 448) 00:34:01.736 2710.927 - 2724.087: 72.3609% ( 432) 00:34:01.736 2724.087 - 2737.247: 73.1702% ( 433) 00:34:01.736 2737.247 - 2750.406: 73.9720% ( 429) 00:34:01.736 2750.406 - 2763.566: 74.7813% ( 433) 00:34:01.736 2763.566 - 2776.726: 75.6037% ( 440) 00:34:01.736 2776.726 - 2789.886: 76.4148% ( 434) 00:34:01.736 2789.886 - 2803.046: 77.2204% ( 431) 00:34:01.736 2803.046 - 2816.206: 78.0166% ( 426) 00:34:01.736 2816.206 - 2829.365: 78.7960% ( 417) 00:34:01.736 2829.365 - 2842.525: 79.5754% ( 417) 00:34:01.736 2842.525 - 2855.685: 80.3398% ( 409) 00:34:01.736 2855.685 - 2868.845: 81.0706% ( 391) 00:34:01.736 2868.845 - 2882.005: 81.8201% ( 401) 
00:34:01.736 2882.005 - 2895.165: 82.5284% ( 379) 00:34:01.736 2895.165 - 2908.324: 83.2424% ( 382) 00:34:01.736 2908.324 - 2921.484: 83.9433% ( 375) 00:34:01.736 2921.484 - 2934.644: 84.6068% ( 355) 00:34:01.736 2934.644 - 2947.804: 85.2385% ( 338) 00:34:01.736 2947.804 - 2960.964: 85.8440% ( 324) 00:34:01.736 2960.964 - 2974.124: 86.4291% ( 313) 00:34:01.736 2974.124 - 2987.284: 86.9991% ( 305) 00:34:01.736 2987.284 - 3000.443: 87.5299% ( 284) 00:34:01.736 3000.443 - 3013.603: 88.0570% ( 282) 00:34:01.736 3013.603 - 3026.763: 88.5336% ( 255) 00:34:01.736 3026.763 - 3039.923: 88.9933% ( 246) 00:34:01.736 3039.923 - 3053.083: 89.4064% ( 221) 00:34:01.736 3053.083 - 3066.243: 89.7970% ( 209) 00:34:01.736 3066.243 - 3079.402: 90.1764% ( 203) 00:34:01.736 3079.402 - 3092.562: 90.5222% ( 185) 00:34:01.736 3092.562 - 3105.722: 90.8642% ( 183) 00:34:01.736 3105.722 - 3118.882: 91.1857% ( 172) 00:34:01.736 3118.882 - 3132.042: 91.5016% ( 169) 00:34:01.736 3132.042 - 3145.202: 91.7931% ( 156) 00:34:01.736 3145.202 - 3158.361: 92.0679% ( 147) 00:34:01.736 3158.361 - 3171.521: 92.3389% ( 145) 00:34:01.736 3171.521 - 3184.681: 92.5856% ( 132) 00:34:01.736 3184.681 - 3197.841: 92.8192% ( 125) 00:34:01.736 3197.841 - 3211.001: 93.0435% ( 120) 00:34:01.736 3211.001 - 3224.161: 93.2622% ( 117) 00:34:01.736 3224.161 - 3237.320: 93.4790% ( 116) 00:34:01.736 3237.320 - 3250.480: 93.6846% ( 110) 00:34:01.736 3250.480 - 3263.640: 93.8659% ( 97) 00:34:01.736 3263.640 - 3276.800: 94.0397% ( 93) 00:34:01.736 3276.800 - 3289.960: 94.2023% ( 87) 00:34:01.736 3289.960 - 3303.120: 94.3612% ( 85) 00:34:01.736 3303.120 - 3316.280: 94.5182% ( 84) 00:34:01.736 3316.280 - 3329.439: 94.6658% ( 79) 00:34:01.736 3329.439 - 3342.599: 94.8041% ( 74) 00:34:01.736 3342.599 - 3355.759: 94.9406% ( 73) 00:34:01.736 3355.759 - 3368.919: 95.0621% ( 65) 00:34:01.736 3368.919 - 3395.239: 95.3088% ( 132) 00:34:01.736 3395.239 - 3421.558: 95.5387% ( 123) 00:34:01.736 3421.558 - 3447.878: 95.7442% ( 110) 00:34:01.736 3447.878 - 3474.198: 95.9667% ( 119) 00:34:01.736 3474.198 - 3500.517: 96.1648% ( 106) 00:34:01.736 3500.517 - 3526.837: 96.3666% ( 108) 00:34:01.736 3526.837 - 3553.157: 96.5498% ( 98) 00:34:01.736 3553.157 - 3579.476: 96.7273% ( 95) 00:34:01.736 3579.476 - 3605.796: 96.9012% ( 93) 00:34:01.736 3605.796 - 3632.116: 97.0731% ( 92) 00:34:01.736 3632.116 - 3658.435: 97.2339% ( 86) 00:34:01.736 3658.435 - 3684.755: 97.3778% ( 77) 00:34:01.736 3684.755 - 3711.075: 97.5179% ( 75) 00:34:01.736 3711.075 - 3737.394: 97.6581% ( 75) 00:34:01.736 3737.394 - 3763.714: 97.7871% ( 69) 00:34:01.736 3763.714 - 3790.034: 97.9142% ( 68) 00:34:01.736 3790.034 - 3816.353: 98.0413% ( 68) 00:34:01.736 3816.353 - 3842.673: 98.1609% ( 64) 00:34:01.736 3842.673 - 3868.993: 98.2730% ( 60) 00:34:01.736 3868.993 - 3895.312: 98.3609% ( 47) 00:34:01.736 3895.312 - 3921.632: 98.4543% ( 50) 00:34:01.736 3921.632 - 3947.952: 98.5496% ( 51) 00:34:01.736 3947.952 - 3974.271: 98.6394% ( 48) 00:34:01.736 3974.271 - 4000.591: 98.7272% ( 47) 00:34:01.736 4000.591 - 4026.911: 98.8094% ( 44) 00:34:01.736 4026.911 - 4053.231: 98.8991% ( 48) 00:34:01.736 4053.231 - 4079.550: 98.9776% ( 42) 00:34:01.736 4079.550 - 4105.870: 99.0487% ( 38) 00:34:01.736 4105.870 - 4132.190: 99.1178% ( 37) 00:34:01.736 4132.190 - 4158.509: 99.1907% ( 39) 00:34:01.736 4158.509 - 4184.829: 99.2561% ( 35) 00:34:01.736 4184.829 - 4211.149: 99.3103% ( 29) 00:34:01.736 4211.149 - 4237.468: 99.3589% ( 26) 00:34:01.736 4237.468 - 4263.788: 99.4019% ( 23) 00:34:01.736 4263.788 - 4290.108: 
99.4486% ( 25) 00:34:01.736 4290.108 - 4316.427: 99.4916% ( 23) 00:34:01.736 4316.427 - 4342.747: 99.5253% ( 18) 00:34:01.736 4342.747 - 4369.067: 99.5514% ( 14) 00:34:01.736 4369.067 - 4395.386: 99.5776% ( 14) 00:34:01.736 4395.386 - 4421.706: 99.6019% ( 13) 00:34:01.736 4421.706 - 4448.026: 99.6225% ( 11) 00:34:01.736 4448.026 - 4474.345: 99.6430% ( 11) 00:34:01.736 4474.345 - 4500.665: 99.6617% ( 10) 00:34:01.736 4500.665 - 4526.985: 99.6804% ( 10) 00:34:01.736 4526.985 - 4553.304: 99.6935% ( 7) 00:34:01.736 4553.304 - 4579.624: 99.7122% ( 10) 00:34:01.736 4579.624 - 4605.944: 99.7215% ( 5) 00:34:01.736 4605.944 - 4632.263: 99.7271% ( 3) 00:34:01.736 4632.263 - 4658.583: 99.7309% ( 2) 00:34:01.736 4658.583 - 4684.903: 99.7346% ( 2) 00:34:01.736 4684.903 - 4711.222: 99.7383% ( 2) 00:34:01.736 4711.222 - 4737.542: 99.7421% ( 2) 00:34:01.736 4737.542 - 4763.862: 99.7477% ( 3) 00:34:01.736 4763.862 - 4790.182: 99.7514% ( 2) 00:34:01.736 4790.182 - 4816.501: 99.7552% ( 2) 00:34:01.736 4816.501 - 4842.821: 99.7589% ( 2) 00:34:01.736 4842.821 - 4869.141: 99.7626% ( 2) 00:34:01.736 4869.141 - 4895.460: 99.7664% ( 2) 00:34:01.736 4895.460 - 4921.780: 99.7701% ( 2) 00:34:01.736 4921.780 - 4948.100: 99.7720% ( 1) 00:34:01.736 4948.100 - 4974.419: 99.7757% ( 2) 00:34:01.736 4974.419 - 5000.739: 99.7795% ( 2) 00:34:01.736 5000.739 - 5027.059: 99.7832% ( 2) 00:34:01.736 5027.059 - 5053.378: 99.7869% ( 2) 00:34:01.736 5053.378 - 5079.698: 99.7907% ( 2) 00:34:01.736 5079.698 - 5106.018: 99.7944% ( 2) 00:34:01.736 5106.018 - 5132.337: 99.7963% ( 1) 00:34:01.736 5132.337 - 5158.657: 99.7981% ( 1) 00:34:01.736 5184.977 - 5211.296: 99.8000% ( 1) 00:34:01.736 5211.296 - 5237.616: 99.8019% ( 1) 00:34:01.736 5237.616 - 5263.936: 99.8038% ( 1) 00:34:01.736 5263.936 - 5290.255: 99.8056% ( 1) 00:34:01.736 5290.255 - 5316.575: 99.8075% ( 1) 00:34:01.736 5316.575 - 5342.895: 99.8094% ( 1) 00:34:01.736 5342.895 - 5369.214: 99.8112% ( 1) 00:34:01.736 5369.214 - 5395.534: 99.8131% ( 1) 00:34:01.736 5395.534 - 5421.854: 99.8150% ( 1) 00:34:01.736 5421.854 - 5448.173: 99.8168% ( 1) 00:34:01.736 5448.173 - 5474.493: 99.8187% ( 1) 00:34:01.737 5500.813 - 5527.133: 99.8206% ( 1) 00:34:01.737 5527.133 - 5553.452: 99.8224% ( 1) 00:34:01.737 5553.452 - 5579.772: 99.8243% ( 1) 00:34:01.737 5579.772 - 5606.092: 99.8262% ( 1) 00:34:01.737 5606.092 - 5632.411: 99.8281% ( 1) 00:34:01.737 5632.411 - 5658.731: 99.8299% ( 1) 00:34:01.737 5658.731 - 5685.051: 99.8318% ( 1) 00:34:01.737 5711.370 - 5737.690: 99.8355% ( 2) 00:34:01.737 5764.010 - 5790.329: 99.8374% ( 1) 00:34:01.737 5790.329 - 5816.649: 99.8393% ( 1) 00:34:01.737 5816.649 - 5842.969: 99.8411% ( 1) 00:34:01.737 5842.969 - 5869.288: 99.8430% ( 1) 00:34:01.737 5869.288 - 5895.608: 99.8449% ( 1) 00:34:01.737 5895.608 - 5921.928: 99.8467% ( 1) 00:34:01.737 5921.928 - 5948.247: 99.8486% ( 1) 00:34:01.737 5948.247 - 5974.567: 99.8505% ( 1) 00:34:01.737 5974.567 - 6000.887: 99.8523% ( 1) 00:34:01.737 6000.887 - 6027.206: 99.8542% ( 1) 00:34:01.737 6027.206 - 6053.526: 99.8561% ( 1) 00:34:01.737 6053.526 - 6079.846: 99.8580% ( 1) 00:34:01.737 6079.846 - 6106.165: 99.8598% ( 1) 00:34:01.737 6106.165 - 6132.485: 99.8617% ( 1) 00:34:01.737 6132.485 - 6158.805: 99.8636% ( 1) 00:34:01.737 6158.805 - 6185.124: 99.8654% ( 1) 00:34:01.737 6185.124 - 6211.444: 99.8673% ( 1) 00:34:01.737 6211.444 - 6237.764: 99.8692% ( 1) 00:34:01.737 6264.084 - 6290.403: 99.8729% ( 2) 00:34:01.737 6290.403 - 6316.723: 99.8748% ( 1) 00:34:01.737 6343.043 - 6369.362: 99.8766% ( 1) 00:34:01.737 6369.362 - 
6395.682: 99.8804% ( 2) 00:34:01.737 6422.002 - 6448.321: 99.8823% ( 1) 00:34:01.737 6448.321 - 6474.641: 99.8841% ( 1) 00:34:01.737 6474.641 - 6500.961: 99.8860% ( 1) 00:34:01.737 6527.280 - 6553.600: 99.8879% ( 1) 00:34:01.737 6553.600 - 6579.920: 99.8897% ( 1) 00:34:01.737 6579.920 - 6606.239: 99.8916% ( 1) 00:34:01.737 6606.239 - 6632.559: 99.8935% ( 1) 00:34:01.737 6632.559 - 6658.879: 99.8953% ( 1) 00:34:01.737 6658.879 - 6685.198: 99.8972% ( 1) 00:34:01.737 6711.518 - 6737.838: 99.8991% ( 1) 00:34:01.737 6737.838 - 6790.477: 99.9028% ( 2) 00:34:01.737 6790.477 - 6843.116: 99.9065% ( 2) 00:34:01.737 6843.116 - 6895.756: 99.9084% ( 1) 00:34:01.737 6895.756 - 6948.395: 99.9122% ( 2) 00:34:01.737 6948.395 - 7001.035: 99.9159% ( 2) 00:34:01.737 7001.035 - 7053.674: 99.9196% ( 2) 00:34:01.737 7053.674 - 7106.313: 99.9234% ( 2) 00:34:01.737 7106.313 - 7158.953: 99.9271% ( 2) 00:34:01.737 7158.953 - 7211.592: 99.9308% ( 2) 00:34:01.737 7211.592 - 7264.231: 99.9346% ( 2) 00:34:01.737 7264.231 - 7316.871: 99.9383% ( 2) 00:34:01.737 7316.871 - 7369.510: 99.9421% ( 2) 00:34:01.737 7369.510 - 7422.149: 99.9458% ( 2) 00:34:01.737 7422.149 - 7474.789: 99.9495% ( 2) 00:34:01.737 7474.789 - 7527.428: 99.9533% ( 2) 00:34:01.737 7527.428 - 7580.067: 99.9570% ( 2) 00:34:01.737 7580.067 - 7632.707: 99.9589% ( 1) 00:34:01.737 7632.707 - 7685.346: 99.9626% ( 2) 00:34:01.737 7685.346 - 7737.986: 99.9664% ( 2) 00:34:01.737 7737.986 - 7790.625: 99.9701% ( 2) 00:34:01.737 7790.625 - 7843.264: 99.9738% ( 2) 00:34:01.737 7843.264 - 7895.904: 99.9757% ( 1) 00:34:01.737 7895.904 - 7948.543: 99.9776% ( 1) 00:34:01.737 7948.543 - 8001.182: 99.9813% ( 2) 00:34:01.737 8001.182 - 8053.822: 99.9850% ( 2) 00:34:01.737 8053.822 - 8106.461: 99.9888% ( 2) 00:34:01.737 8106.461 - 8159.100: 99.9925% ( 2) 00:34:01.737 8159.100 - 8211.740: 99.9963% ( 2) 00:34:01.737 8211.740 - 8264.379: 99.9981% ( 1) 00:34:01.737 8264.379 - 8317.018: 100.0000% ( 1) 00:34:01.737 00:34:01.737 12:53:44 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:34:03.114 Initializing NVMe Controllers 00:34:03.114 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:03.114 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:34:03.114 Initialization complete. Launching workers. 
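[editor's note] While the write-mode workers launched above run their pass, the two spdk_nvme_perf invocations in this stage are worth annotating. Flag meanings are inferred from the invocations and output in this log and may not match every SPDK version; consult spdk_nvme_perf --help for the authoritative list:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    # -q 128  : keep 128 I/Os outstanding per queue
    # -w read|write : I/O pattern for the pass
    # -o 12288: 12 KiB I/O size, i.e. three 4096-byte blocks of LBA format #04
    # -t 1    : run each pass for one second
    # -LL     : latency tracking; judging by the output, the doubled flag is
    #           what produced the per-range latency histograms
    # -i 0    : shared-memory group ID shared by the suite's SPDK processes
    # -N      : present only on the read pass here; left unannotated
    $PERF -q 128 -w read  -o 12288 -t 1 -LL -i 0 -N
    $PERF -q 128 -w write -o 12288 -t 1 -LL -i 0

The summary tables above report the one-second totals (53504 read IOPS at 627 MiB/s; 52901 write IOPS at 620 MiB/s), and the histograms break the same samples down by latency range in microseconds.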
00:34:03.114 ======================================================== 00:34:03.114 Latency(us) 00:34:03.114 Device Information : IOPS MiB/s Average min max 00:34:03.114 PCIE (0000:00:06.0) NSID 1 from core 0: 52900.98 619.93 2421.74 755.03 7494.85 00:34:03.114 ======================================================== 00:34:03.114 Total : 52900.98 619.93 2421.74 755.03 7494.85 00:34:03.114 00:34:03.114 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:34:03.114 ================================================================================= 00:34:03.114 1.00000% : 1460.742us 00:34:03.114 10.00000% : 1763.418us 00:34:03.114 25.00000% : 1947.656us 00:34:03.114 50.00000% : 2197.693us 00:34:03.114 75.00000% : 2737.247us 00:34:03.114 90.00000% : 3474.198us 00:34:03.114 95.00000% : 3921.632us 00:34:03.114 98.00000% : 4342.747us 00:34:03.114 99.00000% : 4579.624us 00:34:03.114 99.50000% : 4842.821us 00:34:03.114 99.90000% : 6211.444us 00:34:03.114 99.99000% : 7369.510us 00:34:03.114 99.99900% : 7527.428us 00:34:03.114 99.99990% : 7527.428us 00:34:03.114 99.99999% : 7527.428us 00:34:03.114 00:34:03.114 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:34:03.114 ============================================================================== 00:34:03.114 Range in us Cumulative IO count 00:34:03.114 753.401 - 756.691: 0.0019% ( 1) 00:34:03.114 796.170 - 799.460: 0.0038% ( 1) 00:34:03.114 815.910 - 819.200: 0.0057% ( 1) 00:34:03.114 881.709 - 888.289: 0.0076% ( 1) 00:34:03.114 888.289 - 894.869: 0.0095% ( 1) 00:34:03.114 894.869 - 901.449: 0.0113% ( 1) 00:34:03.114 908.029 - 914.609: 0.0132% ( 1) 00:34:03.114 934.349 - 940.929: 0.0170% ( 2) 00:34:03.114 947.508 - 954.088: 0.0189% ( 1) 00:34:03.114 954.088 - 960.668: 0.0208% ( 1) 00:34:03.114 973.828 - 980.408: 0.0227% ( 1) 00:34:03.114 980.408 - 986.988: 0.0246% ( 1) 00:34:03.114 1013.308 - 1019.888: 0.0265% ( 1) 00:34:03.114 1059.367 - 1065.947: 0.0302% ( 2) 00:34:03.114 1065.947 - 1072.527: 0.0321% ( 1) 00:34:03.114 1072.527 - 1079.107: 0.0340% ( 1) 00:34:03.114 1079.107 - 1085.687: 0.0378% ( 2) 00:34:03.114 1098.847 - 1105.427: 0.0397% ( 1) 00:34:03.114 1105.427 - 1112.006: 0.0416% ( 1) 00:34:03.114 1112.006 - 1118.586: 0.0435% ( 1) 00:34:03.114 1118.586 - 1125.166: 0.0473% ( 2) 00:34:03.114 1125.166 - 1131.746: 0.0491% ( 1) 00:34:03.114 1131.746 - 1138.326: 0.0567% ( 4) 00:34:03.114 1138.326 - 1144.906: 0.0586% ( 1) 00:34:03.114 1144.906 - 1151.486: 0.0624% ( 2) 00:34:03.115 1151.486 - 1158.066: 0.0643% ( 1) 00:34:03.115 1158.066 - 1164.646: 0.0662% ( 1) 00:34:03.115 1171.226 - 1177.806: 0.0718% ( 3) 00:34:03.115 1184.386 - 1190.965: 0.0756% ( 2) 00:34:03.115 1190.965 - 1197.545: 0.0851% ( 5) 00:34:03.115 1197.545 - 1204.125: 0.0869% ( 1) 00:34:03.115 1217.285 - 1223.865: 0.0907% ( 2) 00:34:03.115 1223.865 - 1230.445: 0.0926% ( 1) 00:34:03.115 1230.445 - 1237.025: 0.0983% ( 3) 00:34:03.115 1237.025 - 1243.605: 0.1021% ( 2) 00:34:03.115 1243.605 - 1250.185: 0.1059% ( 2) 00:34:03.115 1250.185 - 1256.765: 0.1077% ( 1) 00:34:03.115 1256.765 - 1263.345: 0.1342% ( 14) 00:34:03.115 1263.345 - 1269.924: 0.1512% ( 9) 00:34:03.115 1276.504 - 1283.084: 0.1531% ( 1) 00:34:03.115 1283.084 - 1289.664: 0.1550% ( 1) 00:34:03.115 1289.664 - 1296.244: 0.1626% ( 4) 00:34:03.115 1296.244 - 1302.824: 0.1852% ( 12) 00:34:03.115 1302.824 - 1309.404: 0.1947% ( 5) 00:34:03.115 1309.404 - 1315.984: 0.1966% ( 1) 00:34:03.115 1315.984 - 1322.564: 0.2098% ( 7) 00:34:03.115 1322.564 - 1329.144: 0.2325% ( 12) 00:34:03.115 1329.144 - 1335.724: 
0.3478% ( 61) 00:34:03.115 1335.724 - 1342.304: 0.3686% ( 11) 00:34:03.115 1342.304 - 1348.884: 0.3761% ( 4) 00:34:03.115 1348.884 - 1355.463: 0.3875% ( 6) 00:34:03.115 1355.463 - 1362.043: 0.4026% ( 8) 00:34:03.115 1362.043 - 1368.623: 0.4347% ( 17) 00:34:03.115 1368.623 - 1375.203: 0.4744% ( 21) 00:34:03.115 1375.203 - 1381.783: 0.5066% ( 17) 00:34:03.115 1381.783 - 1388.363: 0.5255% ( 10) 00:34:03.115 1388.363 - 1394.943: 0.5595% ( 18) 00:34:03.115 1394.943 - 1401.523: 0.6918% ( 70) 00:34:03.115 1401.523 - 1408.103: 0.7296% ( 20) 00:34:03.115 1408.103 - 1414.683: 0.7561% ( 14) 00:34:03.115 1414.683 - 1421.263: 0.7844% ( 15) 00:34:03.115 1421.263 - 1427.843: 0.8128% ( 15) 00:34:03.115 1427.843 - 1434.422: 0.8619% ( 26) 00:34:03.115 1434.422 - 1441.002: 0.9167% ( 29) 00:34:03.115 1441.002 - 1447.582: 0.9640% ( 25) 00:34:03.115 1447.582 - 1454.162: 0.9961% ( 17) 00:34:03.115 1454.162 - 1460.742: 1.0245% ( 15) 00:34:03.115 1460.742 - 1467.322: 1.0528% ( 15) 00:34:03.115 1467.322 - 1473.902: 1.0793% ( 14) 00:34:03.115 1473.902 - 1480.482: 1.1076% ( 15) 00:34:03.115 1480.482 - 1487.062: 1.1454% ( 20) 00:34:03.115 1487.062 - 1493.642: 1.1814% ( 19) 00:34:03.115 1493.642 - 1500.222: 1.2267% ( 24) 00:34:03.115 1500.222 - 1506.802: 1.2702% ( 23) 00:34:03.115 1506.802 - 1513.382: 1.3288% ( 31) 00:34:03.115 1513.382 - 1519.961: 1.4139% ( 45) 00:34:03.115 1519.961 - 1526.541: 1.5103% ( 51) 00:34:03.115 1526.541 - 1533.121: 1.5896% ( 42) 00:34:03.115 1533.121 - 1539.701: 1.7579% ( 89) 00:34:03.115 1539.701 - 1546.281: 1.9563% ( 105) 00:34:03.115 1546.281 - 1552.861: 2.1170% ( 85) 00:34:03.115 1552.861 - 1559.441: 2.2758% ( 84) 00:34:03.115 1559.441 - 1566.021: 2.4100% ( 71) 00:34:03.115 1566.021 - 1572.601: 2.5914% ( 96) 00:34:03.115 1572.601 - 1579.181: 2.7389% ( 78) 00:34:03.115 1579.181 - 1585.761: 2.8769% ( 73) 00:34:03.115 1585.761 - 1592.341: 3.0167% ( 74) 00:34:03.115 1592.341 - 1598.920: 3.2360% ( 116) 00:34:03.115 1598.920 - 1605.500: 3.3967% ( 85) 00:34:03.115 1605.500 - 1612.080: 3.6556% ( 137) 00:34:03.115 1612.080 - 1618.660: 3.8692% ( 113) 00:34:03.115 1618.660 - 1625.240: 4.1017% ( 123) 00:34:03.115 1625.240 - 1631.820: 4.3909% ( 153) 00:34:03.115 1631.820 - 1638.400: 4.6669% ( 146) 00:34:03.115 1638.400 - 1644.980: 4.8748% ( 110) 00:34:03.115 1644.980 - 1651.560: 5.1659% ( 154) 00:34:03.115 1651.560 - 1658.140: 5.3492% ( 97) 00:34:03.115 1658.140 - 1664.720: 5.6044% ( 135) 00:34:03.115 1664.720 - 1671.300: 5.7764% ( 91) 00:34:03.115 1671.300 - 1677.880: 6.0486% ( 144) 00:34:03.115 1677.880 - 1684.459: 6.4380% ( 206) 00:34:03.115 1684.459 - 1697.619: 7.0731% ( 336) 00:34:03.115 1697.619 - 1710.779: 7.6779% ( 320) 00:34:03.115 1710.779 - 1723.939: 8.2866% ( 322) 00:34:03.115 1723.939 - 1737.099: 9.0237% ( 390) 00:34:03.115 1737.099 - 1750.259: 9.8535% ( 439) 00:34:03.115 1750.259 - 1763.418: 10.6606% ( 427) 00:34:03.115 1763.418 - 1776.578: 11.4753% ( 431) 00:34:03.115 1776.578 - 1789.738: 12.2635% ( 417) 00:34:03.115 1789.738 - 1802.898: 13.1802% ( 485) 00:34:03.115 1802.898 - 1816.058: 14.1007% ( 487) 00:34:03.115 1816.058 - 1829.218: 15.0496% ( 502) 00:34:03.115 1829.218 - 1842.378: 16.0420% ( 525) 00:34:03.115 1842.378 - 1855.537: 17.1042% ( 562) 00:34:03.115 1855.537 - 1868.697: 18.3763% ( 673) 00:34:03.115 1868.697 - 1881.857: 19.5464% ( 619) 00:34:03.115 1881.857 - 1895.017: 21.1681% ( 858) 00:34:03.115 1895.017 - 1908.177: 22.4780% ( 693) 00:34:03.115 1908.177 - 1921.337: 23.6235% ( 606) 00:34:03.115 1921.337 - 1934.496: 24.8956% ( 673) 00:34:03.115 1934.496 - 1947.656: 26.2943% 
( 740) 00:34:03.115 1947.656 - 1960.816: 27.4681% ( 621) 00:34:03.115 1960.816 - 1973.976: 28.6986% ( 651) 00:34:03.115 1973.976 - 1987.136: 30.1522% ( 769) 00:34:03.115 1987.136 - 2000.296: 31.3449% ( 631) 00:34:03.115 2000.296 - 2013.455: 32.4544% ( 587) 00:34:03.115 2013.455 - 2026.615: 33.7737% ( 698) 00:34:03.115 2026.615 - 2039.775: 35.4144% ( 868) 00:34:03.115 2039.775 - 2052.935: 36.8226% ( 745) 00:34:03.115 2052.935 - 2066.095: 38.1760% ( 716) 00:34:03.115 2066.095 - 2079.255: 39.4632% ( 681) 00:34:03.115 2079.255 - 2092.414: 40.9829% ( 804) 00:34:03.115 2092.414 - 2105.574: 42.3892% ( 744) 00:34:03.115 2105.574 - 2118.734: 43.7407% ( 715) 00:34:03.115 2118.734 - 2131.894: 44.8653% ( 595) 00:34:03.115 2131.894 - 2145.054: 46.0599% ( 632) 00:34:03.115 2145.054 - 2158.214: 47.2488% ( 629) 00:34:03.115 2158.214 - 2171.373: 48.3036% ( 558) 00:34:03.115 2171.373 - 2184.533: 49.3394% ( 548) 00:34:03.115 2184.533 - 2197.693: 50.3639% ( 542) 00:34:03.115 2197.693 - 2210.853: 51.3638% ( 529) 00:34:03.115 2210.853 - 2224.013: 52.2729% ( 481) 00:34:03.115 2224.013 - 2237.173: 53.1160% ( 446) 00:34:03.115 2237.173 - 2250.333: 53.8758% ( 402) 00:34:03.115 2250.333 - 2263.492: 54.6886% ( 430) 00:34:03.115 2263.492 - 2276.652: 55.5070% ( 433) 00:34:03.115 2276.652 - 2289.812: 56.2556% ( 396) 00:34:03.115 2289.812 - 2302.972: 56.9984% ( 393) 00:34:03.115 2302.972 - 2316.132: 57.6033% ( 320) 00:34:03.115 2316.132 - 2329.292: 58.4935% ( 471) 00:34:03.115 2329.292 - 2342.451: 59.3895% ( 474) 00:34:03.115 2342.451 - 2355.611: 60.0302% ( 339) 00:34:03.115 2355.611 - 2368.771: 60.6691% ( 338) 00:34:03.115 2368.771 - 2381.931: 61.2721% ( 319) 00:34:03.115 2381.931 - 2395.091: 61.8788% ( 321) 00:34:03.115 2395.091 - 2408.251: 62.7086% ( 439) 00:34:03.115 2408.251 - 2421.410: 63.3229% ( 325) 00:34:03.115 2421.410 - 2434.570: 63.9750% ( 345) 00:34:03.115 2434.570 - 2447.730: 64.5024% ( 279) 00:34:03.115 2447.730 - 2460.890: 65.0600% ( 295) 00:34:03.115 2460.890 - 2474.050: 65.6554% ( 315) 00:34:03.115 2474.050 - 2487.210: 66.2603% ( 320) 00:34:03.115 2487.210 - 2500.369: 66.9332% ( 356) 00:34:03.115 2500.369 - 2513.529: 67.4738% ( 286) 00:34:03.115 2513.529 - 2526.689: 68.0522% ( 306) 00:34:03.115 2526.689 - 2539.849: 68.5531% ( 265) 00:34:03.115 2539.849 - 2553.009: 69.0653% ( 271) 00:34:03.115 2553.009 - 2566.169: 69.6267% ( 297) 00:34:03.115 2566.169 - 2579.329: 70.1389% ( 271) 00:34:03.115 2579.329 - 2592.488: 70.5340% ( 209) 00:34:03.115 2592.488 - 2605.648: 70.9498% ( 220) 00:34:03.115 2605.648 - 2618.808: 71.3732% ( 224) 00:34:03.115 2618.808 - 2631.968: 71.7513% ( 200) 00:34:03.115 2631.968 - 2645.128: 72.1709% ( 222) 00:34:03.115 2645.128 - 2658.288: 72.6585% ( 258) 00:34:03.115 2658.288 - 2671.447: 73.1500% ( 260) 00:34:03.115 2671.447 - 2684.607: 73.5450% ( 209) 00:34:03.115 2684.607 - 2697.767: 73.9098% ( 193) 00:34:03.115 2697.767 - 2710.927: 74.3030% ( 208) 00:34:03.115 2710.927 - 2724.087: 74.6583% ( 188) 00:34:03.115 2724.087 - 2737.247: 75.0364% ( 200) 00:34:03.115 2737.247 - 2750.406: 75.3369% ( 159) 00:34:03.115 2750.406 - 2763.566: 75.6356% ( 158) 00:34:03.115 2763.566 - 2776.726: 75.9267% ( 154) 00:34:03.115 2776.726 - 2789.886: 76.2499% ( 171) 00:34:03.115 2789.886 - 2803.046: 76.5618% ( 165) 00:34:03.115 2803.046 - 2816.206: 76.9114% ( 185) 00:34:03.115 2816.206 - 2829.365: 77.2517% ( 180) 00:34:03.115 2829.365 - 2842.525: 77.5636% ( 165) 00:34:03.115 2842.525 - 2855.685: 77.9340% ( 196) 00:34:03.115 2855.685 - 2868.845: 78.2497% ( 167) 00:34:03.115 2868.845 - 2882.005: 78.5994% ( 
185) 00:34:03.115 2882.005 - 2895.165: 78.9075% ( 163) 00:34:03.115 2895.165 - 2908.324: 79.2345% ( 173) 00:34:03.115 2908.324 - 2921.484: 79.5331% ( 158) 00:34:03.115 2921.484 - 2934.644: 79.8734% ( 180) 00:34:03.115 2934.644 - 2947.804: 80.2041% ( 175) 00:34:03.115 2947.804 - 2960.964: 80.5274% ( 171) 00:34:03.115 2960.964 - 2974.124: 80.8997% ( 197) 00:34:03.115 2974.124 - 2987.284: 81.2286% ( 174) 00:34:03.115 2987.284 - 3000.443: 81.5745% ( 183) 00:34:03.115 3000.443 - 3013.603: 81.9355% ( 191) 00:34:03.115 3013.603 - 3026.763: 82.2777% ( 181) 00:34:03.115 3026.763 - 3039.923: 82.5688% ( 154) 00:34:03.115 3039.923 - 3053.083: 82.8598% ( 154) 00:34:03.115 3053.083 - 3066.243: 83.1377% ( 147) 00:34:03.115 3066.243 - 3079.402: 83.4193% ( 149) 00:34:03.115 3079.402 - 3092.562: 83.6840% ( 140) 00:34:03.115 3092.562 - 3105.722: 83.9297% ( 130) 00:34:03.115 3105.722 - 3118.882: 84.1584% ( 121) 00:34:03.115 3118.882 - 3132.042: 84.4155% ( 136) 00:34:03.115 3132.042 - 3145.202: 84.6612% ( 130) 00:34:03.115 3145.202 - 3158.361: 84.9258% ( 140) 00:34:03.115 3158.361 - 3171.521: 85.2037% ( 147) 00:34:03.115 3171.521 - 3184.681: 85.4664% ( 139) 00:34:03.115 3184.681 - 3197.841: 85.7783% ( 165) 00:34:03.115 3197.841 - 3211.001: 85.9994% ( 117) 00:34:03.116 3211.001 - 3224.161: 86.2508% ( 133) 00:34:03.116 3224.161 - 3237.320: 86.5287% ( 147) 00:34:03.116 3237.320 - 3250.480: 86.7933% ( 140) 00:34:03.116 3250.480 - 3263.640: 87.1222% ( 174) 00:34:03.116 3263.640 - 3276.800: 87.3112% ( 100) 00:34:03.116 3276.800 - 3289.960: 87.5248% ( 113) 00:34:03.116 3289.960 - 3303.120: 87.7252% ( 106) 00:34:03.116 3303.120 - 3316.280: 87.8953% ( 90) 00:34:03.116 3316.280 - 3329.439: 88.0767% ( 96) 00:34:03.116 3329.439 - 3342.599: 88.2733% ( 104) 00:34:03.116 3342.599 - 3355.759: 88.4775% ( 108) 00:34:03.116 3355.759 - 3368.919: 88.6740% ( 104) 00:34:03.116 3368.919 - 3395.239: 89.0955% ( 223) 00:34:03.116 3395.239 - 3421.558: 89.4755% ( 201) 00:34:03.116 3421.558 - 3447.878: 89.8365% ( 191) 00:34:03.116 3447.878 - 3474.198: 90.1881% ( 186) 00:34:03.116 3474.198 - 3500.517: 90.5585% ( 196) 00:34:03.116 3500.517 - 3526.837: 90.9063% ( 184) 00:34:03.116 3526.837 - 3553.157: 91.2333% ( 173) 00:34:03.116 3553.157 - 3579.476: 91.5660% ( 176) 00:34:03.116 3579.476 - 3605.796: 91.9100% ( 182) 00:34:03.116 3605.796 - 3632.116: 92.2446% ( 177) 00:34:03.116 3632.116 - 3658.435: 92.5810% ( 178) 00:34:03.116 3658.435 - 3684.755: 92.8986% ( 168) 00:34:03.116 3684.755 - 3711.075: 93.1651% ( 141) 00:34:03.116 3711.075 - 3737.394: 93.4278% ( 139) 00:34:03.116 3737.394 - 3763.714: 93.6698% ( 128) 00:34:03.116 3763.714 - 3790.034: 93.9269% ( 136) 00:34:03.116 3790.034 - 3816.353: 94.1669% ( 127) 00:34:03.116 3816.353 - 3842.673: 94.3975% ( 122) 00:34:03.116 3842.673 - 3868.993: 94.6224% ( 119) 00:34:03.116 3868.993 - 3895.312: 94.8625% ( 127) 00:34:03.116 3895.312 - 3921.632: 95.0836% ( 117) 00:34:03.116 3921.632 - 3947.952: 95.3124% ( 121) 00:34:03.116 3947.952 - 3974.271: 95.5278% ( 114) 00:34:03.116 3974.271 - 4000.591: 95.7414% ( 113) 00:34:03.116 4000.591 - 4026.911: 95.9512% ( 111) 00:34:03.116 4026.911 - 4053.231: 96.1610% ( 111) 00:34:03.116 4053.231 - 4079.550: 96.3501% ( 100) 00:34:03.116 4079.550 - 4105.870: 96.5410% ( 101) 00:34:03.116 4105.870 - 4132.190: 96.7338% ( 102) 00:34:03.116 4132.190 - 4158.509: 96.9209% ( 99) 00:34:03.116 4158.509 - 4184.829: 97.0910% ( 90) 00:34:03.116 4184.829 - 4211.149: 97.2592% ( 89) 00:34:03.116 4211.149 - 4237.468: 97.4180% ( 84) 00:34:03.116 4237.468 - 4263.788: 97.5825% ( 87) 
00:34:03.116 4263.788 - 4290.108: 97.7431% ( 85) 00:34:03.116 4290.108 - 4316.427: 97.8943% ( 80) 00:34:03.116 4316.427 - 4342.747: 98.0380% ( 76) 00:34:03.116 4342.747 - 4369.067: 98.1779% ( 74) 00:34:03.116 4369.067 - 4395.386: 98.3177% ( 74) 00:34:03.116 4395.386 - 4421.706: 98.4501% ( 70) 00:34:03.116 4421.706 - 4448.026: 98.5672% ( 62) 00:34:03.116 4448.026 - 4474.345: 98.6731% ( 56) 00:34:03.116 4474.345 - 4500.665: 98.7676% ( 50) 00:34:03.116 4500.665 - 4526.985: 98.8546% ( 46) 00:34:03.116 4526.985 - 4553.304: 98.9396% ( 45) 00:34:03.116 4553.304 - 4579.624: 99.0152% ( 40) 00:34:03.116 4579.624 - 4605.944: 99.0833% ( 36) 00:34:03.116 4605.944 - 4632.263: 99.1475% ( 34) 00:34:03.116 4632.263 - 4658.583: 99.2061% ( 31) 00:34:03.116 4658.583 - 4684.903: 99.2609% ( 29) 00:34:03.116 4684.903 - 4711.222: 99.3195% ( 31) 00:34:03.116 4711.222 - 4737.542: 99.3649% ( 24) 00:34:03.116 4737.542 - 4763.862: 99.4103% ( 24) 00:34:03.116 4763.862 - 4790.182: 99.4500% ( 21) 00:34:03.116 4790.182 - 4816.501: 99.4859% ( 19) 00:34:03.116 4816.501 - 4842.821: 99.5142% ( 15) 00:34:03.116 4842.821 - 4869.141: 99.5426% ( 15) 00:34:03.116 4869.141 - 4895.460: 99.5747% ( 17) 00:34:03.116 4895.460 - 4921.780: 99.6050% ( 16) 00:34:03.116 4921.780 - 4948.100: 99.6314% ( 14) 00:34:03.116 4948.100 - 4974.419: 99.6560% ( 13) 00:34:03.116 4974.419 - 5000.739: 99.6711% ( 8) 00:34:03.116 5000.739 - 5027.059: 99.6938% ( 12) 00:34:03.116 5027.059 - 5053.378: 99.7108% ( 9) 00:34:03.116 5053.378 - 5079.698: 99.7259% ( 8) 00:34:03.116 5079.698 - 5106.018: 99.7410% ( 8) 00:34:03.116 5106.018 - 5132.337: 99.7543% ( 7) 00:34:03.116 5132.337 - 5158.657: 99.7675% ( 7) 00:34:03.116 5158.657 - 5184.977: 99.7788% ( 6) 00:34:03.116 5184.977 - 5211.296: 99.7883% ( 5) 00:34:03.116 5211.296 - 5237.616: 99.7978% ( 5) 00:34:03.116 5237.616 - 5263.936: 99.8072% ( 5) 00:34:03.116 5263.936 - 5290.255: 99.8148% ( 4) 00:34:03.116 5290.255 - 5316.575: 99.8204% ( 3) 00:34:03.116 5316.575 - 5342.895: 99.8280% ( 4) 00:34:03.116 5342.895 - 5369.214: 99.8337% ( 3) 00:34:03.116 5369.214 - 5395.534: 99.8374% ( 2) 00:34:03.116 5395.534 - 5421.854: 99.8393% ( 1) 00:34:03.116 5421.854 - 5448.173: 99.8412% ( 1) 00:34:03.116 5448.173 - 5474.493: 99.8431% ( 1) 00:34:03.116 5474.493 - 5500.813: 99.8450% ( 1) 00:34:03.116 5500.813 - 5527.133: 99.8469% ( 1) 00:34:03.116 5527.133 - 5553.452: 99.8488% ( 1) 00:34:03.116 5579.772 - 5606.092: 99.8507% ( 1) 00:34:03.116 5606.092 - 5632.411: 99.8526% ( 1) 00:34:03.116 5632.411 - 5658.731: 99.8545% ( 1) 00:34:03.116 5658.731 - 5685.051: 99.8582% ( 2) 00:34:03.116 5685.051 - 5711.370: 99.8601% ( 1) 00:34:03.116 5711.370 - 5737.690: 99.8620% ( 1) 00:34:03.116 5737.690 - 5764.010: 99.8658% ( 2) 00:34:03.116 5764.010 - 5790.329: 99.8677% ( 1) 00:34:03.116 5790.329 - 5816.649: 99.8715% ( 2) 00:34:03.116 5816.649 - 5842.969: 99.8752% ( 2) 00:34:03.116 5869.288 - 5895.608: 99.8771% ( 1) 00:34:03.116 5895.608 - 5921.928: 99.8790% ( 1) 00:34:03.116 5921.928 - 5948.247: 99.8809% ( 1) 00:34:03.116 5948.247 - 5974.567: 99.8828% ( 1) 00:34:03.116 5974.567 - 6000.887: 99.8866% ( 2) 00:34:03.116 6000.887 - 6027.206: 99.8885% ( 1) 00:34:03.116 6027.206 - 6053.526: 99.8904% ( 1) 00:34:03.116 6053.526 - 6079.846: 99.8923% ( 1) 00:34:03.116 6079.846 - 6106.165: 99.8941% ( 1) 00:34:03.116 6106.165 - 6132.485: 99.8960% ( 1) 00:34:03.116 6132.485 - 6158.805: 99.8979% ( 1) 00:34:03.116 6158.805 - 6185.124: 99.8998% ( 1) 00:34:03.116 6185.124 - 6211.444: 99.9017% ( 1) 00:34:03.116 6211.444 - 6237.764: 99.9036% ( 1) 00:34:03.116 6237.764 
- 6264.084: 99.9055% ( 1) 00:34:03.116 6264.084 - 6290.403: 99.9074% ( 1) 00:34:03.116 6290.403 - 6316.723: 99.9093% ( 1) 00:34:03.116 6316.723 - 6343.043: 99.9112% ( 1) 00:34:03.116 6343.043 - 6369.362: 99.9168% ( 3) 00:34:03.116 6369.362 - 6395.682: 99.9187% ( 1) 00:34:03.116 6395.682 - 6422.002: 99.9206% ( 1) 00:34:03.116 6448.321 - 6474.641: 99.9244% ( 2) 00:34:03.116 6474.641 - 6500.961: 99.9263% ( 1) 00:34:03.116 6500.961 - 6527.280: 99.9301% ( 2) 00:34:03.116 6527.280 - 6553.600: 99.9320% ( 1) 00:34:03.116 6553.600 - 6579.920: 99.9338% ( 1) 00:34:03.116 6579.920 - 6606.239: 99.9357% ( 1) 00:34:03.116 6606.239 - 6632.559: 99.9376% ( 1) 00:34:03.116 6632.559 - 6658.879: 99.9395% ( 1) 00:34:03.116 6658.879 - 6685.198: 99.9414% ( 1) 00:34:03.116 6685.198 - 6711.518: 99.9433% ( 1) 00:34:03.116 6711.518 - 6737.838: 99.9452% ( 1) 00:34:03.116 6737.838 - 6790.477: 99.9490% ( 2) 00:34:03.116 6790.477 - 6843.116: 99.9527% ( 2) 00:34:03.116 6843.116 - 6895.756: 99.9565% ( 2) 00:34:03.116 6895.756 - 6948.395: 99.9603% ( 2) 00:34:03.116 6948.395 - 7001.035: 99.9660% ( 3) 00:34:03.116 7001.035 - 7053.674: 99.9698% ( 2) 00:34:03.116 7053.674 - 7106.313: 99.9735% ( 2) 00:34:03.116 7106.313 - 7158.953: 99.9773% ( 2) 00:34:03.116 7158.953 - 7211.592: 99.9792% ( 1) 00:34:03.116 7211.592 - 7264.231: 99.9849% ( 3) 00:34:03.116 7264.231 - 7316.871: 99.9887% ( 2) 00:34:03.116 7316.871 - 7369.510: 99.9924% ( 2) 00:34:03.116 7369.510 - 7422.149: 99.9962% ( 2) 00:34:03.116 7422.149 - 7474.789: 99.9981% ( 1) 00:34:03.116 7474.789 - 7527.428: 100.0000% ( 1) 00:34:03.116 00:34:03.116 12:53:45 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:34:03.116 00:34:03.116 real 0m2.746s 00:34:03.116 user 0m2.266s 00:34:03.116 sys 0m0.340s 00:34:03.116 12:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.116 12:53:45 -- common/autotest_common.sh@10 -- # set +x 00:34:03.116 ************************************ 00:34:03.116 END TEST nvme_perf 00:34:03.116 ************************************ 00:34:03.116 12:53:45 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:34:03.116 12:53:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:34:03.116 12:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:03.116 12:53:45 -- common/autotest_common.sh@10 -- # set +x 00:34:03.116 ************************************ 00:34:03.116 START TEST nvme_hello_world 00:34:03.116 ************************************ 00:34:03.116 12:53:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:34:03.376 Initializing NVMe Controllers 00:34:03.376 Attached to 0000:00:06.0 00:34:03.376 Namespace ID: 1 size: 5GB 00:34:03.376 Initialization complete. 00:34:03.376 INFO: using host memory buffer for IO 00:34:03.376 Hello world! 
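[editor's note] The real/user/sys triple and asterisk banners that follow each test are emitted by the harness's run_test wrapper from common/autotest_common.sh, whose definition is not part of this log. A minimal sketch of the behavior visible here, assuming nothing beyond the banners and timings actually shown:

    # Simplified stand-in for run_test; the real helper also manages xtrace
    # (xtrace_disable) and validates its arguments ('[' 2 -le 1 ']' above).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # produces the real/user/sys lines after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }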
00:34:03.634 00:34:03.634 real 0m0.358s 00:34:03.634 user 0m0.115s 00:34:03.634 sys 0m0.177s 00:34:03.634 12:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.634 ************************************ 00:34:03.634 12:53:45 -- common/autotest_common.sh@10 -- # set +x 00:34:03.634 END TEST nvme_hello_world 00:34:03.634 ************************************ 00:34:03.634 12:53:45 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:34:03.634 12:53:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:03.634 12:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:03.634 12:53:45 -- common/autotest_common.sh@10 -- # set +x 00:34:03.634 ************************************ 00:34:03.634 START TEST nvme_sgl 00:34:03.634 ************************************ 00:34:03.634 12:53:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:34:03.893 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:34:03.893 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:34:03.893 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:34:03.893 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:34:03.893 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:34:03.893 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:34:04.151 NVMe Readv/Writev Request test 00:34:04.151 Attached to 0000:00:06.0 00:34:04.151 0000:00:06.0: build_io_request_2 test passed 00:34:04.151 0000:00:06.0: build_io_request_4 test passed 00:34:04.151 0000:00:06.0: build_io_request_5 test passed 00:34:04.151 0000:00:06.0: build_io_request_6 test passed 00:34:04.151 0000:00:06.0: build_io_request_7 test passed 00:34:04.151 0000:00:06.0: build_io_request_10 test passed 00:34:04.151 Cleaning up... 00:34:04.151 00:34:04.151 real 0m0.469s 00:34:04.151 user 0m0.216s 00:34:04.151 sys 0m0.177s 00:34:04.151 12:53:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.151 12:53:46 -- common/autotest_common.sh@10 -- # set +x 00:34:04.151 ************************************ 00:34:04.152 END TEST nvme_sgl 00:34:04.152 ************************************ 00:34:04.152 12:53:46 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:34:04.152 12:53:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:04.152 12:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:04.152 12:53:46 -- common/autotest_common.sh@10 -- # set +x 00:34:04.152 ************************************ 00:34:04.152 START TEST nvme_e2edp 00:34:04.152 ************************************ 00:34:04.152 12:53:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:34:04.411 NVMe Write/Read with End-to-End data protection test 00:34:04.411 Attached to 0000:00:06.0 00:34:04.411 Cleaning up... 
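The START TEST / END TEST banners and the bash real/user/sys timings that bracket every test here come from the harness's run_test wrapper in test/common/autotest_common.sh. A rough sketch of its observable behavior, reconstructed from the banners and `time` output in this log rather than copied from upstream (the real helper also manages xtrace and timing bookkeeping):

    # Approximation of run_test based only on what this log shows.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp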
00:34:04.411 00:34:04.411 real 0m0.336s 00:34:04.411 user 0m0.105s 00:34:04.411 sys 0m0.160s 00:34:04.411 12:53:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.411 12:53:46 -- common/autotest_common.sh@10 -- # set +x 00:34:04.411 ************************************ 00:34:04.411 END TEST nvme_e2edp 00:34:04.411 ************************************ 00:34:04.670 12:53:46 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:34:04.670 12:53:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:04.670 12:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:04.670 12:53:46 -- common/autotest_common.sh@10 -- # set +x 00:34:04.670 ************************************ 00:34:04.670 START TEST nvme_reserve 00:34:04.670 ************************************ 00:34:04.670 12:53:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:34:04.929 ===================================================== 00:34:04.929 NVMe Controller at PCI bus 0, device 6, function 0 00:34:04.929 ===================================================== 00:34:04.929 Reservations: Not Supported 00:34:04.929 Reservation test passed 00:34:04.929 00:34:04.929 real 0m0.352s 00:34:04.929 user 0m0.107s 00:34:04.929 sys 0m0.179s 00:34:04.929 12:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.929 12:53:47 -- common/autotest_common.sh@10 -- # set +x 00:34:04.929 ************************************ 00:34:04.929 END TEST nvme_reserve 00:34:04.929 ************************************ 00:34:04.929 12:53:47 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:34:04.929 12:53:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:04.929 12:53:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:04.929 12:53:47 -- common/autotest_common.sh@10 -- # set +x 00:34:04.929 ************************************ 00:34:04.929 START TEST nvme_err_injection 00:34:04.929 ************************************ 00:34:04.929 12:53:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:34:05.498 NVMe Error Injection test 00:34:05.498 Attached to 0000:00:06.0 00:34:05.498 0000:00:06.0: get features failed as expected 00:34:05.498 0000:00:06.0: get features successfully as expected 00:34:05.498 0000:00:06.0: read failed as expected 00:34:05.498 0000:00:06.0: read successfully as expected 00:34:05.498 Cleaning up... 
00:34:05.498 00:34:05.498 real 0m0.361s 00:34:05.498 user 0m0.117s 00:34:05.498 sys 0m0.170s 00:34:05.498 12:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.498 12:53:47 -- common/autotest_common.sh@10 -- # set +x 00:34:05.498 ************************************ 00:34:05.498 END TEST nvme_err_injection 00:34:05.498 ************************************ 00:34:05.498 12:53:47 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:34:05.498 12:53:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:34:05.498 12:53:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:05.498 12:53:47 -- common/autotest_common.sh@10 -- # set +x 00:34:05.498 ************************************ 00:34:05.498 START TEST nvme_overhead 00:34:05.498 ************************************ 00:34:05.498 12:53:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:34:06.930 Initializing NVMe Controllers 00:34:06.930 Attached to 0000:00:06.0 00:34:06.930 Initialization complete. Launching workers. 00:34:06.930 submit (in ns) avg, min, max = 13805.7, 11533.3, 49711.6 00:34:06.930 complete (in ns) avg, min, max = 7989.7, 7563.9, 63405.6 00:34:06.930 00:34:06.930 Submit histogram 00:34:06.930 ================ 00:34:06.930 Range in us Cumulative Count 00:34:06.930 11.515 - 11.566: 0.0152% ( 1) 00:34:06.930 11.772 - 11.823: 0.0304% ( 1) 00:34:06.930 11.875 - 11.926: 0.0457% ( 1) 00:34:06.930 12.132 - 12.183: 0.0609% ( 1) 00:34:06.930 12.183 - 12.235: 0.0761% ( 1) 00:34:06.930 12.235 - 12.286: 0.0913% ( 1) 00:34:06.930 12.286 - 12.337: 0.1065% ( 1) 00:34:06.930 12.389 - 12.440: 0.1522% ( 3) 00:34:06.930 12.440 - 12.492: 0.2739% ( 8) 00:34:06.930 12.492 - 12.543: 0.4109% ( 9) 00:34:06.930 12.543 - 12.594: 0.6392% ( 15) 00:34:06.930 12.594 - 12.646: 0.9588% ( 21) 00:34:06.930 12.646 - 12.697: 1.4762% ( 34) 00:34:06.930 12.697 - 12.749: 2.1306% ( 43) 00:34:06.930 12.749 - 12.800: 2.9828% ( 56) 00:34:06.930 12.800 - 12.851: 3.5459% ( 37) 00:34:06.930 12.851 - 12.903: 4.1242% ( 38) 00:34:06.930 12.903 - 12.954: 4.6112% ( 32) 00:34:06.930 12.954 - 13.006: 5.1590% ( 36) 00:34:06.930 13.006 - 13.057: 5.7982% ( 42) 00:34:06.930 13.057 - 13.108: 6.6200% ( 54) 00:34:06.930 13.108 - 13.160: 7.4874% ( 57) 00:34:06.930 13.160 - 13.263: 10.3637% ( 189) 00:34:06.930 13.263 - 13.365: 16.2532% ( 387) 00:34:06.930 13.365 - 13.468: 23.7255% ( 491) 00:34:06.930 13.468 - 13.571: 34.8197% ( 729) 00:34:06.930 13.571 - 13.674: 46.7509% ( 784) 00:34:06.930 13.674 - 13.777: 58.5908% ( 778) 00:34:06.930 13.777 - 13.880: 68.9849% ( 683) 00:34:06.930 13.880 - 13.982: 78.3442% ( 615) 00:34:06.930 13.982 - 14.085: 85.5730% ( 475) 00:34:06.930 14.085 - 14.188: 90.3515% ( 314) 00:34:06.930 14.188 - 14.291: 93.3648% ( 198) 00:34:06.930 14.291 - 14.394: 95.2366% ( 123) 00:34:06.930 14.394 - 14.496: 96.5302% ( 85) 00:34:06.930 14.496 - 14.599: 97.3672% ( 55) 00:34:06.930 14.599 - 14.702: 97.9303% ( 37) 00:34:06.930 14.702 - 14.805: 98.2347% ( 20) 00:34:06.930 14.805 - 14.908: 98.3869% ( 10) 00:34:06.931 14.908 - 15.010: 98.4173% ( 2) 00:34:06.931 15.010 - 15.113: 98.5086% ( 6) 00:34:06.931 15.113 - 15.216: 98.5238% ( 1) 00:34:06.931 15.216 - 15.319: 98.5695% ( 3) 00:34:06.931 15.319 - 15.422: 98.6151% ( 3) 00:34:06.931 15.627 - 15.730: 98.6303% ( 1) 00:34:06.931 15.833 - 15.936: 98.6456% ( 1) 00:34:06.931 15.936 - 16.039: 98.6608% ( 1) 00:34:06.931 16.347 - 16.450: 98.6760% ( 1) 
00:34:06.931 16.553 - 16.655: 98.7064% ( 2) 00:34:06.931 16.861 - 16.964: 98.7217% ( 1) 00:34:06.931 17.067 - 17.169: 98.7673% ( 3) 00:34:06.931 17.169 - 17.272: 98.7977% ( 2) 00:34:06.931 17.272 - 17.375: 98.8130% ( 1) 00:34:06.931 17.375 - 17.478: 98.8586% ( 3) 00:34:06.931 17.581 - 17.684: 98.8738% ( 1) 00:34:06.931 17.786 - 17.889: 98.8891% ( 1) 00:34:06.931 17.992 - 18.095: 98.9195% ( 2) 00:34:06.931 18.095 - 18.198: 98.9347% ( 1) 00:34:06.931 18.198 - 18.300: 98.9499% ( 1) 00:34:06.931 18.300 - 18.403: 98.9956% ( 3) 00:34:06.931 18.506 - 18.609: 99.0260% ( 2) 00:34:06.931 18.609 - 18.712: 99.0717% ( 3) 00:34:06.931 18.712 - 18.814: 99.1021% ( 2) 00:34:06.931 18.917 - 19.020: 99.1478% ( 3) 00:34:06.931 19.226 - 19.329: 99.2239% ( 5) 00:34:06.931 19.329 - 19.431: 99.2847% ( 4) 00:34:06.931 19.431 - 19.534: 99.3152% ( 2) 00:34:06.931 19.534 - 19.637: 99.3456% ( 2) 00:34:06.931 19.637 - 19.740: 99.3608% ( 1) 00:34:06.931 19.843 - 19.945: 99.3913% ( 2) 00:34:06.931 19.945 - 20.048: 99.4217% ( 2) 00:34:06.931 20.254 - 20.357: 99.4369% ( 1) 00:34:06.931 20.459 - 20.562: 99.4521% ( 1) 00:34:06.931 20.562 - 20.665: 99.4826% ( 2) 00:34:06.931 20.768 - 20.871: 99.4978% ( 1) 00:34:06.931 20.871 - 20.973: 99.5130% ( 1) 00:34:06.931 20.973 - 21.076: 99.5282% ( 1) 00:34:06.931 21.076 - 21.179: 99.5434% ( 1) 00:34:06.931 21.179 - 21.282: 99.5587% ( 1) 00:34:06.931 21.796 - 21.899: 99.5739% ( 1) 00:34:06.931 22.104 - 22.207: 99.5891% ( 1) 00:34:06.931 22.310 - 22.413: 99.6043% ( 1) 00:34:06.931 24.983 - 25.086: 99.6195% ( 1) 00:34:06.931 25.189 - 25.292: 99.6348% ( 1) 00:34:06.931 25.394 - 25.497: 99.6500% ( 1) 00:34:06.931 25.600 - 25.703: 99.6804% ( 2) 00:34:06.931 26.217 - 26.320: 99.7109% ( 2) 00:34:06.931 26.320 - 26.525: 99.7261% ( 1) 00:34:06.931 26.525 - 26.731: 99.7717% ( 3) 00:34:06.931 26.731 - 26.937: 99.7869% ( 1) 00:34:06.931 27.142 - 27.348: 99.8022% ( 1) 00:34:06.931 28.170 - 28.376: 99.8174% ( 1) 00:34:06.931 28.582 - 28.787: 99.8326% ( 1) 00:34:06.931 29.198 - 29.404: 99.8478% ( 1) 00:34:06.931 29.610 - 29.815: 99.8630% ( 1) 00:34:06.931 29.815 - 30.021: 99.8783% ( 1) 00:34:06.931 30.227 - 30.432: 99.8935% ( 1) 00:34:06.931 30.432 - 30.638: 99.9087% ( 1) 00:34:06.931 30.638 - 30.843: 99.9239% ( 1) 00:34:06.931 30.843 - 31.049: 99.9391% ( 1) 00:34:06.931 34.545 - 34.750: 99.9543% ( 1) 00:34:06.931 35.367 - 35.573: 99.9696% ( 1) 00:34:06.931 49.555 - 49.761: 100.0000% ( 2) 00:34:06.931 00:34:06.931 Complete histogram 00:34:06.931 ================== 00:34:06.931 Range in us Cumulative Count 00:34:06.931 7.557 - 7.608: 2.5719% ( 169) 00:34:06.931 7.608 - 7.659: 21.6253% ( 1252) 00:34:06.931 7.659 - 7.711: 50.3576% ( 1888) 00:34:06.931 7.711 - 7.762: 67.2957% ( 1113) 00:34:06.931 7.762 - 7.814: 73.9309% ( 436) 00:34:06.931 7.814 - 7.865: 77.3094% ( 222) 00:34:06.931 7.865 - 7.916: 79.8052% ( 164) 00:34:06.931 7.916 - 7.968: 81.2510% ( 95) 00:34:06.931 7.968 - 8.019: 82.1793% ( 61) 00:34:06.931 8.019 - 8.071: 83.2141% ( 68) 00:34:06.931 8.071 - 8.122: 84.5077% ( 85) 00:34:06.931 8.122 - 8.173: 85.4664% ( 63) 00:34:06.931 8.173 - 8.225: 86.3491% ( 58) 00:34:06.931 8.225 - 8.276: 88.1601% ( 119) 00:34:06.931 8.276 - 8.328: 90.9451% ( 183) 00:34:06.931 8.328 - 8.379: 92.7560% ( 119) 00:34:06.931 8.379 - 8.431: 93.7605% ( 66) 00:34:06.931 8.431 - 8.482: 94.4149% ( 43) 00:34:06.931 8.482 - 8.533: 95.0540% ( 42) 00:34:06.931 8.533 - 8.585: 95.6171% ( 37) 00:34:06.931 8.585 - 8.636: 95.9823% ( 24) 00:34:06.931 8.636 - 8.688: 96.6520% ( 44) 00:34:06.931 8.688 - 8.739: 97.0781% ( 28) 00:34:06.931 
8.739 - 8.790: 97.4281% ( 23) 00:34:06.931 8.790 - 8.842: 97.6259% ( 13) 00:34:06.931 8.842 - 8.893: 97.7781% ( 10) 00:34:06.931 8.893 - 8.945: 97.8846% ( 7) 00:34:06.931 8.945 - 8.996: 98.0216% ( 9) 00:34:06.931 8.996 - 9.047: 98.0977% ( 5) 00:34:06.931 9.047 - 9.099: 98.1281% ( 2) 00:34:06.931 9.099 - 9.150: 98.1890% ( 4) 00:34:06.931 9.150 - 9.202: 98.2194% ( 2) 00:34:06.931 9.202 - 9.253: 98.2499% ( 2) 00:34:06.931 9.253 - 9.304: 98.2955% ( 3) 00:34:06.931 9.356 - 9.407: 98.3108% ( 1) 00:34:06.931 9.407 - 9.459: 98.3564% ( 3) 00:34:06.931 9.561 - 9.613: 98.3716% ( 1) 00:34:06.931 9.613 - 9.664: 98.3869% ( 1) 00:34:06.931 9.664 - 9.716: 98.4173% ( 2) 00:34:06.931 9.716 - 9.767: 98.4325% ( 1) 00:34:06.931 9.818 - 9.870: 98.4477% ( 1) 00:34:06.931 9.921 - 9.973: 98.4629% ( 1) 00:34:06.931 10.024 - 10.076: 98.4934% ( 2) 00:34:06.931 10.076 - 10.127: 98.5086% ( 1) 00:34:06.931 11.206 - 11.258: 98.5238% ( 1) 00:34:06.931 11.618 - 11.669: 98.5390% ( 1) 00:34:06.931 12.183 - 12.235: 98.5543% ( 1) 00:34:06.931 12.235 - 12.286: 98.5695% ( 1) 00:34:06.931 12.286 - 12.337: 98.5999% ( 2) 00:34:06.931 12.749 - 12.800: 98.6151% ( 1) 00:34:06.931 12.851 - 12.903: 98.6303% ( 1) 00:34:06.931 12.903 - 12.954: 98.6760% ( 3) 00:34:06.931 12.954 - 13.006: 98.6912% ( 1) 00:34:06.931 13.057 - 13.108: 98.7064% ( 1) 00:34:06.931 13.108 - 13.160: 98.7217% ( 1) 00:34:06.931 13.160 - 13.263: 98.7521% ( 2) 00:34:06.931 13.468 - 13.571: 98.7977% ( 3) 00:34:06.931 13.571 - 13.674: 98.8130% ( 1) 00:34:06.931 13.674 - 13.777: 98.8434% ( 2) 00:34:06.931 13.777 - 13.880: 98.8586% ( 1) 00:34:06.931 13.880 - 13.982: 98.8891% ( 2) 00:34:06.931 14.085 - 14.188: 98.9195% ( 2) 00:34:06.931 14.291 - 14.394: 98.9651% ( 3) 00:34:06.931 14.394 - 14.496: 98.9804% ( 1) 00:34:06.931 15.524 - 15.627: 98.9956% ( 1) 00:34:06.931 15.730 - 15.833: 99.0108% ( 1) 00:34:06.931 17.272 - 17.375: 99.0412% ( 2) 00:34:06.931 17.375 - 17.478: 99.0565% ( 1) 00:34:06.931 17.889 - 17.992: 99.0869% ( 2) 00:34:06.931 17.992 - 18.095: 99.1021% ( 1) 00:34:06.931 18.095 - 18.198: 99.1478% ( 3) 00:34:06.931 18.198 - 18.300: 99.1782% ( 2) 00:34:06.931 18.506 - 18.609: 99.1934% ( 1) 00:34:06.931 18.609 - 18.712: 99.2391% ( 3) 00:34:06.931 18.712 - 18.814: 99.2695% ( 2) 00:34:06.931 18.917 - 19.020: 99.3000% ( 2) 00:34:06.931 19.020 - 19.123: 99.3152% ( 1) 00:34:06.931 19.123 - 19.226: 99.3304% ( 1) 00:34:06.931 19.226 - 19.329: 99.3456% ( 1) 00:34:06.931 19.329 - 19.431: 99.3608% ( 1) 00:34:06.931 19.431 - 19.534: 99.3760% ( 1) 00:34:06.931 19.534 - 19.637: 99.3913% ( 1) 00:34:06.931 19.637 - 19.740: 99.4065% ( 1) 00:34:06.931 19.740 - 19.843: 99.4521% ( 3) 00:34:06.931 19.843 - 19.945: 99.4826% ( 2) 00:34:06.931 19.945 - 20.048: 99.5130% ( 2) 00:34:06.931 20.048 - 20.151: 99.5434% ( 2) 00:34:06.931 20.151 - 20.254: 99.5891% ( 3) 00:34:06.931 20.254 - 20.357: 99.6195% ( 2) 00:34:06.931 20.357 - 20.459: 99.6348% ( 1) 00:34:06.931 20.459 - 20.562: 99.6652% ( 2) 00:34:06.931 20.562 - 20.665: 99.7109% ( 3) 00:34:06.931 20.871 - 20.973: 99.7413% ( 2) 00:34:06.931 20.973 - 21.076: 99.7565% ( 1) 00:34:06.931 21.282 - 21.385: 99.7717% ( 1) 00:34:06.931 21.693 - 21.796: 99.7869% ( 1) 00:34:06.931 21.796 - 21.899: 99.8022% ( 1) 00:34:06.931 22.104 - 22.207: 99.8174% ( 1) 00:34:06.931 23.544 - 23.647: 99.8326% ( 1) 00:34:06.931 23.647 - 23.749: 99.8478% ( 1) 00:34:06.932 24.983 - 25.086: 99.8630% ( 1) 00:34:06.932 28.787 - 28.993: 99.8783% ( 1) 00:34:06.932 29.404 - 29.610: 99.9087% ( 2) 00:34:06.932 29.815 - 30.021: 99.9239% ( 1) 00:34:06.932 30.021 - 30.227: 99.9391% 
( 1) 00:34:06.932 31.049 - 31.255: 99.9543% ( 1) 00:34:06.932 39.891 - 40.096: 99.9696% ( 1) 00:34:06.932 44.620 - 44.826: 99.9848% ( 1) 00:34:06.932 63.332 - 63.743: 100.0000% ( 1) 00:34:06.932 00:34:06.932 ************************************ 00:34:06.932 END TEST nvme_overhead 00:34:06.932 ************************************ 00:34:06.932 00:34:06.932 real 0m1.349s 00:34:06.932 user 0m1.134s 00:34:06.932 sys 0m0.140s 00:34:06.932 12:53:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.932 12:53:49 -- common/autotest_common.sh@10 -- # set +x 00:34:06.932 12:53:49 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:34:06.932 12:53:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:34:06.932 12:53:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:06.932 12:53:49 -- common/autotest_common.sh@10 -- # set +x 00:34:06.932 ************************************ 00:34:06.932 START TEST nvme_arbitration 00:34:06.932 ************************************ 00:34:06.932 12:53:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:34:11.124 Initializing NVMe Controllers 00:34:11.124 Attached to 0000:00:06.0 00:34:11.124 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:34:11.124 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:34:11.124 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:34:11.124 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:34:11.124 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:34:11.124 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:34:11.124 Initialization complete. Launching workers. 
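The arbitration example echoes its full command line above; a standalone re-run with the same flags (-t 3 is the run time in seconds, -c 0xf the core mask for lcores 0-3, -n 100000 the I/O count behind the secs/100000 ios column, -i 0 the shared shm ID; the remaining flags are reproduced verbatim from the log without interpretation):

    SPDK_REPO=/home/vagrant/spdk_repo/spdk   # path from this log
    sudo "$SPDK_REPO/build/examples/arbitration" \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf \
        -m 0 -a 0 -b 0 -n 100000 -i 0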
00:34:11.124 Starting thread on core 1 with urgent priority queue 00:34:11.124 Starting thread on core 2 with urgent priority queue 00:34:11.124 Starting thread on core 0 with urgent priority queue 00:34:11.124 Starting thread on core 3 with urgent priority queue 00:34:11.124 QEMU NVMe Ctrl (12340 ) core 0: 1130.67 IO/s 88.44 secs/100000 ios 00:34:11.124 QEMU NVMe Ctrl (12340 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:34:11.124 QEMU NVMe Ctrl (12340 ) core 2: 405.33 IO/s 246.71 secs/100000 ios 00:34:11.124 QEMU NVMe Ctrl (12340 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:34:11.124 ======================================================== 00:34:11.124 00:34:11.124 ************************************ 00:34:11.124 END TEST nvme_arbitration 00:34:11.124 ************************************ 00:34:11.124 00:34:11.124 real 0m3.657s 00:34:11.124 user 0m9.638s 00:34:11.124 sys 0m0.189s 00:34:11.124 12:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:11.124 12:53:52 -- common/autotest_common.sh@10 -- # set +x 00:34:11.124 12:53:52 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:34:11.124 12:53:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:34:11.124 12:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:11.124 12:53:52 -- common/autotest_common.sh@10 -- # set +x 00:34:11.124 ************************************ 00:34:11.124 START TEST nvme_single_aen 00:34:11.124 ************************************ 00:34:11.124 12:53:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:34:11.124 [2024-10-01 12:53:53.061622] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:34:11.124 [2024-10-01 12:53:53.061707] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.124 [2024-10-01 12:53:53.309707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:34:11.124 Asynchronous Event Request test 00:34:11.124 Attached to 0000:00:06.0 00:34:11.124 Reset controller to setup AER completions for this process 00:34:11.124 Registering asynchronous event callbacks... 00:34:11.124 Getting orig temperature thresholds of all controllers 00:34:11.124 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:34:11.124 Setting all controllers temperature threshold low to trigger AER 00:34:11.124 Waiting for all controllers temperature threshold to be set lower 00:34:11.124 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:34:11.124 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:34:11.124 Waiting for all controllers to trigger AER and reset threshold 00:34:11.124 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:34:11.124 Cleaning up... 
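The per-core arbitration rows above are internally consistent: the seconds column is simply the fixed I/O count divided by the measured rate. A quick shell check against two of the rows:

    # secs/100000 ios = 100000 / (IO/s)
    awk 'BEGIN { printf "core 0: %.2f\n", 100000 / 1130.67 }'   # -> 88.44
    awk 'BEGIN { printf "core 1: %.2f\n", 100000 / 832.00  }'   # -> 120.19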
00:34:11.124 ************************************ 00:34:11.124 END TEST nvme_single_aen 00:34:11.124 ************************************ 00:34:11.124 00:34:11.124 real 0m0.377s 00:34:11.124 user 0m0.116s 00:34:11.124 sys 0m0.171s 00:34:11.124 12:53:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:11.124 12:53:53 -- common/autotest_common.sh@10 -- # set +x 00:34:11.124 12:53:53 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:34:11.124 12:53:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:11.124 12:53:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:11.124 12:53:53 -- common/autotest_common.sh@10 -- # set +x 00:34:11.124 ************************************ 00:34:11.124 START TEST nvme_doorbell_aers 00:34:11.124 ************************************ 00:34:11.124 12:53:53 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:34:11.124 12:53:53 -- nvme/nvme.sh@70 -- # bdfs=() 00:34:11.124 12:53:53 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:34:11.124 12:53:53 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:34:11.124 12:53:53 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:34:11.124 12:53:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:11.124 12:53:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:34:11.124 12:53:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:11.124 12:53:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:11.124 12:53:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:11.124 12:53:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:11.124 12:53:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:34:11.124 12:53:53 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:34:11.124 12:53:53 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:34:11.383 [2024-10-01 12:53:53.879474] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140247) is not found. Dropping the request. 00:34:21.360 Executing: test_write_invalid_db 00:34:21.360 Waiting for AER completion... 00:34:21.360 Failure: test_write_invalid_db 00:34:21.360 00:34:21.360 Executing: test_invalid_db_write_overflow_sq 00:34:21.360 Waiting for AER completion... 00:34:21.360 Failure: test_invalid_db_write_overflow_sq 00:34:21.360 00:34:21.360 Executing: test_invalid_db_write_overflow_cq 00:34:21.360 Waiting for AER completion... 
00:34:21.360 Failure: test_invalid_db_write_overflow_cq 00:34:21.360 00:34:21.360 ************************************ 00:34:21.360 END TEST nvme_doorbell_aers 00:34:21.360 ************************************ 00:34:21.360 00:34:21.360 real 0m10.142s 00:34:21.360 user 0m7.347s 00:34:21.360 sys 0m2.745s 00:34:21.360 12:54:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:21.360 12:54:03 -- common/autotest_common.sh@10 -- # set +x 00:34:21.360 12:54:03 -- nvme/nvme.sh@97 -- # uname 00:34:21.360 12:54:03 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:34:21.360 12:54:03 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:34:21.360 12:54:03 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:34:21.360 12:54:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:21.360 12:54:03 -- common/autotest_common.sh@10 -- # set +x 00:34:21.360 ************************************ 00:34:21.360 START TEST nvme_multi_aen 00:34:21.360 ************************************ 00:34:21.360 12:54:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:34:21.360 [2024-10-01 12:54:03.731472] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:34:21.360 [2024-10-01 12:54:03.731623] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.618 [2024-10-01 12:54:03.974492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:34:21.618 [2024-10-01 12:54:03.974585] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140247) is not found. Dropping the request. 00:34:21.618 [2024-10-01 12:54:03.974738] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140247) is not found. Dropping the request. 00:34:21.618 [2024-10-01 12:54:03.974769] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140247) is not found. Dropping the request. 00:34:21.618 [2024-10-01 12:54:03.982406] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:34:21.618 Child process pid: 140442 00:34:21.618 [2024-10-01 12:54:03.982726] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.876 [Child] Asynchronous Event Request test 00:34:21.876 [Child] Attached to 0000:00:06.0 00:34:21.876 [Child] Registering asynchronous event callbacks... 00:34:21.876 [Child] Getting orig temperature thresholds of all controllers 00:34:21.876 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:34:21.876 [Child] Waiting for all controllers to trigger AER and reset threshold 00:34:21.876 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:34:21.876 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:34:21.876 [Child] Cleaning up... 00:34:22.136 Asynchronous Event Request test 00:34:22.136 Attached to 0000:00:06.0 00:34:22.136 Reset controller to setup AER completions for this process 00:34:22.136 Registering asynchronous event callbacks... 
00:34:22.136 Getting orig temperature thresholds of all controllers 00:34:22.136 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:34:22.136 Setting all controllers temperature threshold low to trigger AER 00:34:22.136 Waiting for all controllers temperature threshold to be set lower 00:34:22.136 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:34:22.136 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:34:22.136 Waiting for all controllers to trigger AER and reset threshold 00:34:22.136 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:34:22.136 Cleaning up... 00:34:22.136 ************************************ 00:34:22.136 END TEST nvme_multi_aen 00:34:22.136 ************************************ 00:34:22.136 00:34:22.136 real 0m0.786s 00:34:22.136 user 0m0.275s 00:34:22.136 sys 0m0.323s 00:34:22.136 12:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:22.136 12:54:04 -- common/autotest_common.sh@10 -- # set +x 00:34:22.136 12:54:04 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:34:22.136 12:54:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:34:22.136 12:54:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:22.136 12:54:04 -- common/autotest_common.sh@10 -- # set +x 00:34:22.136 ************************************ 00:34:22.136 START TEST nvme_startup 00:34:22.136 ************************************ 00:34:22.136 12:54:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:34:22.394 Initializing NVMe Controllers 00:34:22.394 Attached to 0000:00:06.0 00:34:22.394 Initialization complete. 00:34:22.394 Time used:248015.781 (us). 00:34:22.394 ************************************ 00:34:22.394 END TEST nvme_startup 00:34:22.394 ************************************ 00:34:22.394 00:34:22.394 real 0m0.361s 00:34:22.394 user 0m0.127s 00:34:22.394 sys 0m0.159s 00:34:22.394 12:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:22.394 12:54:04 -- common/autotest_common.sh@10 -- # set +x 00:34:22.651 12:54:04 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:34:22.651 12:54:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:22.651 12:54:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:22.651 12:54:04 -- common/autotest_common.sh@10 -- # set +x 00:34:22.651 ************************************ 00:34:22.651 START TEST nvme_multi_secondary 00:34:22.651 ************************************ 00:34:22.651 12:54:04 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:34:22.651 12:54:04 -- nvme/nvme.sh@52 -- # pid0=140511 00:34:22.651 12:54:04 -- nvme/nvme.sh@54 -- # pid1=140512 00:34:22.651 12:54:04 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:34:22.651 12:54:04 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:34:22.651 12:54:04 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:34:25.933 Initializing NVMe Controllers 00:34:25.933 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:25.933 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:34:25.933 Initialization complete. Launching workers. 
00:34:25.933 ======================================================== 00:34:25.933 Latency(us) 00:34:25.933 Device Information : IOPS MiB/s Average min max 00:34:25.933 PCIE (0000:00:06.0) NSID 1 from core 1: 37258.23 145.54 429.14 140.82 1686.02 00:34:25.933 ======================================================== 00:34:25.933 Total : 37258.23 145.54 429.14 140.82 1686.02 00:34:25.933 00:34:26.191 Initializing NVMe Controllers 00:34:26.191 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:26.191 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:34:26.191 Initialization complete. Launching workers. 00:34:26.191 ======================================================== 00:34:26.191 Latency(us) 00:34:26.191 Device Information : IOPS MiB/s Average min max 00:34:26.191 PCIE (0000:00:06.0) NSID 1 from core 2: 16123.01 62.98 992.10 158.73 20665.56 00:34:26.191 ======================================================== 00:34:26.191 Total : 16123.01 62.98 992.10 158.73 20665.56 00:34:26.191 00:34:26.191 12:54:08 -- nvme/nvme.sh@56 -- # wait 140511 00:34:28.718 Initializing NVMe Controllers 00:34:28.718 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:28.718 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:34:28.718 Initialization complete. Launching workers. 00:34:28.718 ======================================================== 00:34:28.718 Latency(us) 00:34:28.718 Device Information : IOPS MiB/s Average min max 00:34:28.718 PCIE (0000:00:06.0) NSID 1 from core 0: 45194.20 176.54 353.79 114.33 2220.88 00:34:28.718 ======================================================== 00:34:28.718 Total : 45194.20 176.54 353.79 114.33 2220.88 00:34:28.718 00:34:28.718 12:54:10 -- nvme/nvme.sh@57 -- # wait 140512 00:34:28.718 12:54:10 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:34:28.718 12:54:10 -- nvme/nvme.sh@61 -- # pid0=140591 00:34:28.718 12:54:10 -- nvme/nvme.sh@63 -- # pid1=140592 00:34:28.718 12:54:10 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:34:28.718 12:54:10 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:34:32.041 Initializing NVMe Controllers 00:34:32.041 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:32.041 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:34:32.041 Initialization complete. Launching workers. 00:34:32.041 ======================================================== 00:34:32.041 Latency(us) 00:34:32.041 Device Information : IOPS MiB/s Average min max 00:34:32.041 PCIE (0000:00:06.0) NSID 1 from core 0: 35657.65 139.29 448.48 137.02 1836.25 00:34:32.041 ======================================================== 00:34:32.041 Total : 35657.65 139.29 448.48 137.02 1836.25 00:34:32.041 00:34:32.041 Initializing NVMe Controllers 00:34:32.041 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:32.041 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:34:32.041 Initialization complete. Launching workers. 
00:34:32.041 ======================================================== 00:34:32.041 Latency(us) 00:34:32.041 Device Information : IOPS MiB/s Average min max 00:34:32.041 PCIE (0000:00:06.0) NSID 1 from core 1: 36698.67 143.35 435.68 156.40 8399.87 00:34:32.041 ======================================================== 00:34:32.041 Total : 36698.67 143.35 435.68 156.40 8399.87 00:34:32.041 00:34:34.572 Initializing NVMe Controllers 00:34:34.572 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:34:34.572 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:34:34.572 Initialization complete. Launching workers. 00:34:34.572 ======================================================== 00:34:34.572 Latency(us) 00:34:34.572 Device Information : IOPS MiB/s Average min max 00:34:34.572 PCIE (0000:00:06.0) NSID 1 from core 2: 16481.85 64.38 970.07 160.94 28859.00 00:34:34.572 ======================================================== 00:34:34.572 Total : 16481.85 64.38 970.07 160.94 28859.00 00:34:34.572 00:34:34.572 ************************************ 00:34:34.572 END TEST nvme_multi_secondary 00:34:34.572 ************************************ 00:34:34.572 12:54:16 -- nvme/nvme.sh@65 -- # wait 140591 00:34:34.572 12:54:16 -- nvme/nvme.sh@66 -- # wait 140592 00:34:34.572 00:34:34.572 real 0m11.557s 00:34:34.572 user 0m18.800s 00:34:34.572 sys 0m1.047s 00:34:34.572 12:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:34.572 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:34:34.572 12:54:16 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:34:34.572 12:54:16 -- nvme/nvme.sh@102 -- # kill_stub 00:34:34.572 12:54:16 -- common/autotest_common.sh@1065 -- # [[ -e /proc/139800 ]] 00:34:34.572 12:54:16 -- common/autotest_common.sh@1066 -- # kill 139800 00:34:34.572 12:54:16 -- common/autotest_common.sh@1067 -- # wait 139800 00:34:34.831 [2024-10-01 12:54:17.319371] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140441) is not found. Dropping the request. 00:34:34.831 [2024-10-01 12:54:17.319518] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140441) is not found. Dropping the request. 00:34:34.831 [2024-10-01 12:54:17.319581] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140441) is not found. Dropping the request. 00:34:34.831 [2024-10-01 12:54:17.319621] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 140441) is not found. Dropping the request. 00:34:35.128 12:54:17 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:34:35.128 12:54:17 -- common/autotest_common.sh@1073 -- # echo 2 00:34:35.128 12:54:17 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:34:35.128 12:54:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:35.128 12:54:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:35.128 12:54:17 -- common/autotest_common.sh@10 -- # set +x 00:34:35.128 ************************************ 00:34:35.128 START TEST bdev_nvme_reset_stuck_adm_cmd 00:34:35.128 ************************************ 00:34:35.128 12:54:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:34:35.411 * Looking for test storage... 
00:34:35.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:34:35.411 12:54:17 -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:35.411 12:54:17 -- common/autotest_common.sh@1509 -- # local bdfs 00:34:35.411 12:54:17 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:35.411 12:54:17 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:35.411 12:54:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:35.411 12:54:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:34:35.411 12:54:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:35.411 12:54:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:35.411 12:54:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:35.411 12:54:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:35.411 12:54:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:34:35.411 12:54:17 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:34:35.411 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=140757 00:34:35.412 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:34:35.412 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:35.412 12:54:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 140757 00:34:35.412 12:54:17 -- common/autotest_common.sh@819 -- # '[' -z 140757 ']' 00:34:35.412 12:54:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.412 12:54:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:35.412 12:54:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.412 12:54:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:35.412 12:54:17 -- common/autotest_common.sh@10 -- # set +x 00:34:35.412 [2024-10-01 12:54:17.893208] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
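The trace that follows arms a one-shot error injection on an admin command, fires that command so it gets stuck, then verifies that a controller reset completes it within the test timeout. Condensed into plain rpc.py calls against the spdk_tgt launched above (its startup messages continue below; the harness goes through its rpc_cmd wrapper, and all values are the ones from this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 64-byte admin command blob from this run: opc 0x0a (GET FEATURES)
    # with cdw10=7 (number of queues), as the NOTICE lines below confirm.
    CMD=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
    tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)

    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    # Hold the next matching admin command for up to 15 s, then complete
    # it with sct 0 / sc 1 without ever submitting it to the device.
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    start_time=$(date +%s)
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD" > "$tmp_file" &
    get_feat_pid=$!
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0   # must flush the stuck command
    wait "$get_feat_pid"
    echo "diff_time=$(( $(date +%s) - start_time ))"   # 2 in this run
    jq -r .cpl "$tmp_file"   # completion blob the test decodes into sct/sc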
00:34:35.412 [2024-10-01 12:54:17.893357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140757 ] 00:34:35.670 [2024-10-01 12:54:18.093386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.929 [2024-10-01 12:54:18.359233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:35.929 [2024-10-01 12:54:18.360078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.929 [2024-10-01 12:54:18.360203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.929 [2024-10-01 12:54:18.360307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.929 [2024-10-01 12:54:18.360321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:36.866 12:54:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:36.866 12:54:19 -- common/autotest_common.sh@852 -- # return 0 00:34:36.866 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:34:36.866 12:54:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.866 12:54:19 -- common/autotest_common.sh@10 -- # set +x 00:34:37.125 nvme0n1 00:34:37.126 12:54:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_TvymX.txt 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:34:37.126 12:54:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.126 12:54:19 -- common/autotest_common.sh@10 -- # set +x 00:34:37.126 true 00:34:37.126 12:54:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727787259 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=140793 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:34:37.126 12:54:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:34:39.032 12:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.032 12:54:21 -- common/autotest_common.sh@10 -- # set +x 00:34:39.032 [2024-10-01 12:54:21.487761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:34:39.032 [2024-10-01 12:54:21.488345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:39.032 [2024-10-01 12:54:21.488437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:34:39.032 [2024-10-01 12:54:21.488469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.032 [2024-10-01 12:54:21.490458] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:39.032 12:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 140793 00:34:39.032 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 140793 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 140793 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.032 12:54:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.032 12:54:21 -- common/autotest_common.sh@10 -- # set +x 00:34:39.032 12:54:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:34:39.032 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_TvymX.txt 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_TvymX.txt 00:34:39.289 12:54:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 140757 00:34:39.289 12:54:21 -- common/autotest_common.sh@926 -- # '[' -z 140757 ']' 00:34:39.289 12:54:21 -- common/autotest_common.sh@930 -- # kill -0 140757 00:34:39.289 12:54:21 -- common/autotest_common.sh@931 -- # uname 00:34:39.289 
12:54:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:39.289 12:54:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140757 00:34:39.289 12:54:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:39.289 12:54:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:39.289 12:54:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140757' 00:34:39.289 killing process with pid 140757 00:34:39.289 12:54:21 -- common/autotest_common.sh@945 -- # kill 140757 00:34:39.289 12:54:21 -- common/autotest_common.sh@950 -- # wait 140757 00:34:41.822 12:54:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:34:41.822 12:54:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:34:41.822 00:34:41.822 real 0m6.693s 00:34:41.822 user 0m22.965s 00:34:41.822 sys 0m0.840s 00:34:41.822 12:54:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.822 12:54:24 -- common/autotest_common.sh@10 -- # set +x 00:34:41.822 ************************************ 00:34:41.822 END TEST bdev_nvme_reset_stuck_adm_cmd 00:34:41.822 ************************************ 00:34:42.082 12:54:24 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:34:42.082 12:54:24 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:34:42.082 12:54:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:42.082 12:54:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:42.082 12:54:24 -- common/autotest_common.sh@10 -- # set +x 00:34:42.082 ************************************ 00:34:42.082 START TEST nvme_fio 00:34:42.082 ************************************ 00:34:42.082 12:54:24 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:34:42.082 12:54:24 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:34:42.082 12:54:24 -- nvme/nvme.sh@32 -- # ran_fio=false 00:34:42.082 12:54:24 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:34:42.082 12:54:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:42.082 12:54:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:34:42.082 12:54:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:42.082 12:54:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:42.082 12:54:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:42.082 12:54:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:42.082 12:54:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:34:42.082 12:54:24 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:34:42.082 12:54:24 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:34:42.082 12:54:24 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:34:42.082 12:54:24 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:34:42.082 12:54:24 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:34:42.340 12:54:24 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:34:42.340 12:54:24 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:34:42.597 12:54:25 -- nvme/nvme.sh@41 -- # bs=4096 00:34:42.597 12:54:25 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:34:42.598 
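The fio step just issued drives the raw controller through SPDK's fio plugin instead of a kernel block device: the wrapper expands to an LD_PRELOAD of the ASan runtime (the build is sanitized) plus build/fio/spdk_nvme, with the controller's PCIe address passed as the fio filename, dots standing in for colons. The equivalent direct invocation, as the expansion below shows:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # The ASan runtime must come first in LD_PRELOAD for a sanitized build.
    LD_PRELOAD="/lib/x86_64-linux-gnu/libasan.so.6 $SPDK_REPO/build/fio/spdk_nvme" \
        /usr/src/fio/fio "$SPDK_REPO/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096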
12:54:25 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:34:42.598 12:54:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:42.598 12:54:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:42.598 12:54:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:42.598 12:54:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:34:42.598 12:54:25 -- common/autotest_common.sh@1320 -- # shift 00:34:42.598 12:54:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:42.598 12:54:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.598 12:54:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:34:42.598 12:54:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:42.598 12:54:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:42.598 12:54:25 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:34:42.598 12:54:25 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:34:42.598 12:54:25 -- common/autotest_common.sh@1326 -- # break 00:34:42.598 12:54:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:34:42.598 12:54:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:34:42.856 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:42.856 fio-3.35 00:34:42.856 Starting 1 thread 00:34:47.049 00:34:47.049 test: (groupid=0, jobs=1): err= 0: pid=140943: Tue Oct 1 12:54:28 2024 00:34:47.049 read: IOPS=21.6k, BW=84.5MiB/s (88.6MB/s)(169MiB/2001msec) 00:34:47.049 slat (usec): min=3, max=180, avg= 4.63, stdev= 1.87 00:34:47.049 clat (usec): min=242, max=11699, avg=2946.19, stdev=782.42 00:34:47.049 lat (usec): min=246, max=11879, avg=2950.82, stdev=783.42 00:34:47.049 clat percentiles (usec): 00:34:47.049 | 1.00th=[ 1844], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2671], 00:34:47.049 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:34:47.049 | 70.00th=[ 2868], 80.00th=[ 2966], 90.00th=[ 3359], 95.00th=[ 3884], 00:34:47.049 | 99.00th=[ 7504], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 9110], 00:34:47.049 | 99.99th=[11338] 00:34:47.049 bw ( KiB/s): min=80560, max=85552, per=96.70%, avg=83712.00, stdev=2742.45, samples=3 00:34:47.049 iops : min=20140, max=21388, avg=20928.00, stdev=685.61, samples=3 00:34:47.049 write: IOPS=21.5k, BW=83.9MiB/s (88.0MB/s)(168MiB/2001msec); 0 zone resets 00:34:47.049 slat (nsec): min=3742, max=54001, avg=4760.56, stdev=1601.31 00:34:47.049 clat (usec): min=184, max=11493, avg=2960.72, stdev=795.07 00:34:47.049 lat (usec): min=189, max=11535, avg=2965.48, stdev=796.02 00:34:47.049 clat percentiles (usec): 00:34:47.049 | 1.00th=[ 1860], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2671], 00:34:47.049 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:34:47.049 | 70.00th=[ 2868], 80.00th=[ 2966], 90.00th=[ 3359], 95.00th=[ 3949], 00:34:47.049 | 99.00th=[ 7504], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 9110], 
00:34:47.049 | 99.99th=[10945] 00:34:47.050 bw ( KiB/s): min=80456, max=85784, per=97.52%, avg=83784.00, stdev=2901.65, samples=3 00:34:47.050 iops : min=20114, max=21446, avg=20946.00, stdev=725.41, samples=3 00:34:47.050 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:34:47.050 lat (msec) : 2=1.43%, 4=93.74%, 10=4.75%, 20=0.03% 00:34:47.050 cpu : usr=99.95%, sys=0.00%, ctx=10, majf=0, minf=36 00:34:47.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:47.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.050 issued rwts: total=43304,42979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.050 00:34:47.050 Run status group 0 (all jobs): 00:34:47.050 READ: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=169MiB (177MB), run=2001-2001msec 00:34:47.050 WRITE: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:34:47.050 ----------------------------------------------------- 00:34:47.050 Suppressions used: 00:34:47.050 count bytes template 00:34:47.050 1 32 /usr/src/fio/parse.c 00:34:47.050 ----------------------------------------------------- 00:34:47.050 00:34:47.050 12:54:29 -- nvme/nvme.sh@44 -- # ran_fio=true 00:34:47.050 12:54:29 -- nvme/nvme.sh@46 -- # true 00:34:47.050 00:34:47.050 real 0m4.661s 00:34:47.050 user 0m3.646s 00:34:47.050 sys 0m0.981s 00:34:47.050 12:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.050 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:34:47.050 ************************************ 00:34:47.050 END TEST nvme_fio 00:34:47.050 ************************************ 00:34:47.050 00:34:47.050 real 0m51.441s 00:34:47.050 user 2m12.907s 00:34:47.050 sys 0m11.788s 00:34:47.050 12:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.050 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:34:47.050 ************************************ 00:34:47.050 END TEST nvme 00:34:47.050 ************************************ 00:34:47.050 12:54:29 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:34:47.050 12:54:29 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:34:47.050 12:54:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:47.050 12:54:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:47.050 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:34:47.050 ************************************ 00:34:47.050 START TEST nvme_scc 00:34:47.050 ************************************ 00:34:47.050 12:54:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:34:47.050 * Looking for test storage... 
00:34:47.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:47.050 12:54:29 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:47.050 12:54:29 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:47.050 12:54:29 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:34:47.050 12:54:29 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:47.050 12:54:29 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:47.050 12:54:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.050 12:54:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.050 12:54:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.050 12:54:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.050 12:54:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.050 12:54:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.050 12:54:29 -- paths/export.sh@5 -- # export PATH 00:34:47.050 12:54:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:47.050 12:54:29 -- nvme/functions.sh@10 -- # ctrls=() 00:34:47.050 12:54:29 -- nvme/functions.sh@10 -- # declare -A ctrls 00:34:47.050 12:54:29 -- nvme/functions.sh@11 -- # nvmes=() 00:34:47.050 12:54:29 -- nvme/functions.sh@11 -- # declare -A nvmes 00:34:47.050 12:54:29 -- nvme/functions.sh@12 -- # bdfs=() 00:34:47.050 12:54:29 -- nvme/functions.sh@12 -- # declare -A bdfs 00:34:47.050 12:54:29 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:34:47.050 12:54:29 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:34:47.050 12:54:29 -- nvme/functions.sh@14 -- # nvme_name= 00:34:47.050 12:54:29 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:47.050 12:54:29 -- nvme/nvme_scc.sh@12 -- # uname 00:34:47.050 12:54:29 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:34:47.050 12:54:29 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
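The long register dump that follows is test/common/nvme/functions.sh walking /sys/class/nvme and caching every identify field in bash associative arrays (nvme0[vid], nvme0[oncs], nvme0n1[nsze], and so on), so later helpers can test features without re-running nvme-cli. The core of nvme_get reduces to roughly this sketch (simplified; the real helper also handles namespaces and multi-word values):

    nvme_get() {
      local ref=$1 reg val
      local -gA "$ref=()"                  # e.g. declare a global array nvme0
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}           # strip padding around the key
        val=${val# }
        [[ -n $reg ]] || continue
        eval "${ref}[${reg}]=\"${val}\""   # nvme0[oncs]=0x15d, ...
      done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    }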
00:34:47.050 12:54:29 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:47.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:47.307 Waiting for block devices as requested 00:34:47.566 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:34:47.566 12:54:30 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:34:47.566 12:54:30 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:34:47.566 12:54:30 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:47.566 12:54:30 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:34:47.566 12:54:30 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:34:47.566 12:54:30 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:34:47.566 12:54:30 -- scripts/common.sh@15 -- # local i 00:34:47.566 12:54:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:34:47.566 12:54:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:34:47.566 12:54:30 -- scripts/common.sh@24 -- # return 0 00:34:47.566 12:54:30 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:34:47.566 12:54:30 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:34:47.566 12:54:30 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:34:47.566 12:54:30 -- nvme/functions.sh@18 -- # shift 00:34:47.566 12:54:30 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.566 12:54:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:34:47.566 12:54:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.566 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.566 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.566 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.566 12:54:30 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.566 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.566 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:47.566 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 
00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:47.567 12:54:30 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.567 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:34:47.567 12:54:30 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:34:47.567 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- 
# read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:34:47.568 
12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:34:47.568 
12:54:30 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.568 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.568 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:34:47.568 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 
12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:34:47.569 12:54:30 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:34:47.569 12:54:30 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:34:47.569 12:54:30 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:34:47.569 12:54:30 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:34:47.569 12:54:30 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:34:47.569 12:54:30 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@18 -- # shift 00:34:47.569 12:54:30 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 
00:34:47.569 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.569 12:54:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.829 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:34:47.829 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.829 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 
12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:47.830 12:54:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # IFS=: 00:34:47.830 12:54:30 -- nvme/functions.sh@21 -- # read -r reg val 00:34:47.830 12:54:30 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:34:47.830 12:54:30 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:34:47.830 12:54:30 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:34:47.830 12:54:30 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:34:47.830 12:54:30 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:34:47.830 12:54:30 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:34:47.830 12:54:30 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:34:47.830 12:54:30 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:34:47.830 12:54:30 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:34:47.830 12:54:30 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:34:47.830 12:54:30 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:34:47.830 12:54:30 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:34:47.830 12:54:30 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:34:47.830 12:54:30 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:34:47.830 12:54:30 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:34:47.830 12:54:30 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:34:47.830 12:54:30 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:34:47.830 12:54:30 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:34:47.830 12:54:30 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:34:47.830 12:54:30 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:34:47.830 12:54:30 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:34:47.831 12:54:30 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:34:47.831 12:54:30 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:34:47.831 12:54:30 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:34:47.831 12:54:30 -- nvme/functions.sh@76 -- # echo 0x15d 00:34:47.831 12:54:30 -- nvme/functions.sh@184 -- # oncs=0x15d 00:34:47.831 12:54:30 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:34:47.831 12:54:30 -- nvme/functions.sh@197 -- # echo nvme0 00:34:47.831 12:54:30 -- nvme/functions.sh@205 
-- # (( 1 > 0 ))
00:34:47.831 12:54:30 -- nvme/functions.sh@206 -- # echo nvme0
00:34:47.831 12:54:30 -- nvme/functions.sh@207 -- # return 0
00:34:47.831 12:54:30 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
00:34:47.831 12:54:30 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0
00:34:47.831 12:54:30 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:34:48.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:34:48.398 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:34:49.358 12:54:31 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0'
00:34:49.358 12:54:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:34:49.358 12:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:34:49.358 12:54:31 -- common/autotest_common.sh@10 -- # set +x
00:34:49.358 ************************************
00:34:49.358 START TEST nvme_simple_copy
00:34:49.358 ************************************
00:34:49.358 12:54:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0'
00:34:49.616 Initializing NVMe Controllers
00:34:49.616 Attaching to 0000:00:06.0
00:34:49.616 Controller supports SCC. Attached to 0000:00:06.0
00:34:49.616 Namespace ID: 1 size: 5GB
00:34:49.616 Initialization complete.
00:34:49.616
00:34:49.616 Controller QEMU NVMe Ctrl (12340 )
00:34:49.616 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:34:49.616 Namespace Block Size:4096
00:34:49.616 Writing LBAs 0 to 63 with Random Data
00:34:49.616 Copied LBAs from 0 - 63 to the Destination LBA 256
00:34:49.616 LBAs matching Written Data: 64
00:34:49.616
00:34:49.616 real 0m0.322s
00:34:49.616 user 0m0.116s
00:34:49.616 sys 0m0.108s
00:34:49.616 12:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:49.616 12:54:32 -- common/autotest_common.sh@10 -- # set +x
00:34:49.616 ************************************
00:34:49.616 END TEST nvme_simple_copy
00:34:49.616 ************************************
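nvme_simple_copy passes because the feature probe just above singled out a controller whose ONCS word advertises the Copy command: bit 8 of 0x15d is set, which is exactly the `(( oncs & 1 << 8 ))` test inside ctrl_has_scc. The same check in isolation:

    # ONCS as reported by id-ctrl above; bit 8 = Simple Copy Command support.
    oncs=0x15d
    if (( oncs & (1 << 8) )); then
      echo "controller supports the Copy command"
    fi

The test app then wrote LBAs 0-63 with random data, issued a Copy to destination LBA 256, and verified that all 64 LBAs compare equal, as the output above records.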
00:34:49.875
00:34:49.875 real 0m2.969s
00:34:49.875 user 0m0.844s
00:34:49.875 sys 0m2.025s
00:34:49.875 12:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:49.875 12:54:32 -- common/autotest_common.sh@10 -- # set +x
00:34:49.875 ************************************
00:34:49.875 END TEST nvme_scc
00:34:49.875 ************************************
00:34:49.875 12:54:32 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]]
00:34:49.875 12:54:32 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]]
00:34:49.875 12:54:32 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]]
00:34:49.875 12:54:32 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]]
00:34:49.875 12:54:32 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]]
00:34:49.875 12:54:32 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:34:49.875 12:54:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:34:49.875 12:54:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:34:49.875 12:54:32 -- common/autotest_common.sh@10 -- # set +x
00:34:49.875 ************************************
00:34:49.875 START TEST nvme_rpc
00:34:49.875 ************************************
00:34:49.875 12:54:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:34:49.875 * Looking for test storage...
00:34:49.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:34:49.875 12:54:32 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:34:49.875 12:54:32 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:34:49.875 12:54:32 -- common/autotest_common.sh@1509 -- # bdfs=()
00:34:49.875 12:54:32 -- common/autotest_common.sh@1509 -- # local bdfs
00:34:49.875 12:54:32 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:34:49.875 12:54:32 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:34:49.875 12:54:32 -- common/autotest_common.sh@1498 -- # bdfs=()
00:34:49.875 12:54:32 -- common/autotest_common.sh@1498 -- # local bdfs
00:34:49.875 12:54:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:34:49.876 12:54:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:34:49.876 12:54:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:34:50.137 12:54:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:34:50.137 12:54:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0
00:34:50.137 12:54:32 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0
00:34:50.137 12:54:32 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0
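get_first_nvme_bdf is a thin wrapper around the generated bdev config: gen_nvme.sh prints one bdev_nvme_attach_controller config block per local controller, and jq pulls out the PCI addresses. On this single-controller guest the whole chain collapses to:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[0]}"   # -> 0000:00:06.0 here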
00:34:50.138 [2024-10-01 12:54:32.529082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141440 ] 00:34:50.440 [2024-10-01 12:54:32.690521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:50.440 [2024-10-01 12:54:32.942138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:50.440 [2024-10-01 12:54:32.942579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.440 [2024-10-01 12:54:32.942585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.815 12:54:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:51.815 12:54:34 -- common/autotest_common.sh@852 -- # return 0 00:34:51.815 12:54:34 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:34:51.815 Nvme0n1 00:34:51.815 12:54:34 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:34:51.815 12:54:34 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:34:52.073 request: 00:34:52.073 { 00:34:52.073 "filename": "non_existing_file", 00:34:52.073 "bdev_name": "Nvme0n1", 00:34:52.073 "method": "bdev_nvme_apply_firmware", 00:34:52.073 "req_id": 1 00:34:52.073 } 00:34:52.073 Got JSON-RPC error response 00:34:52.073 response: 00:34:52.073 { 00:34:52.073 "code": -32603, 00:34:52.073 "message": "open file failed." 00:34:52.073 } 00:34:52.073 12:54:34 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:34:52.073 12:54:34 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:34:52.073 12:54:34 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:52.331 12:54:34 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:34:52.331 12:54:34 -- nvme/nvme_rpc.sh@40 -- # killprocess 141440 00:34:52.331 12:54:34 -- common/autotest_common.sh@926 -- # '[' -z 141440 ']' 00:34:52.331 12:54:34 -- common/autotest_common.sh@930 -- # kill -0 141440 00:34:52.331 12:54:34 -- common/autotest_common.sh@931 -- # uname 00:34:52.331 12:54:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:52.331 12:54:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141440 00:34:52.331 12:54:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:52.331 killing process with pid 141440 00:34:52.331 12:54:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:52.331 12:54:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141440' 00:34:52.331 12:54:34 -- common/autotest_common.sh@945 -- # kill 141440 00:34:52.331 12:54:34 -- common/autotest_common.sh@950 -- # wait 141440 00:34:54.864 ************************************ 00:34:54.864 END TEST nvme_rpc 00:34:54.864 ************************************ 00:34:54.864 00:34:54.864 real 0m5.016s 00:34:54.864 user 0m9.098s 00:34:54.864 sys 0m0.849s 00:34:54.864 12:54:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:54.864 12:54:37 -- common/autotest_common.sh@10 -- # set +x 00:34:54.864 12:54:37 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:34:54.864 12:54:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:54.864 12:54:37 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:34:54.864 12:54:37 -- common/autotest_common.sh@10 -- # set +x 00:34:54.864 ************************************ 00:34:54.864 START TEST nvme_rpc_timeouts 00:34:54.864 ************************************ 00:34:54.864 12:54:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:34:55.123 * Looking for test storage... 00:34:55.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:55.123 12:54:37 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:55.123 12:54:37 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_141536 00:34:55.123 12:54:37 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_141536 00:34:55.123 12:54:37 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:34:55.123 12:54:37 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=141560 00:34:55.123 12:54:37 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:34:55.123 12:54:37 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 141560 00:34:55.123 12:54:37 -- common/autotest_common.sh@819 -- # '[' -z 141560 ']' 00:34:55.123 12:54:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.123 12:54:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:55.123 12:54:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.123 12:54:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:55.123 12:54:37 -- common/autotest_common.sh@10 -- # set +x 00:34:55.123 [2024-10-01 12:54:37.520077] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:34:55.123 [2024-10-01 12:54:37.520201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141560 ] 00:34:55.382 [2024-10-01 12:54:37.690799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:55.640 [2024-10-01 12:54:37.954970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:55.640 [2024-10-01 12:54:37.955442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.640 [2024-10-01 12:54:37.955442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.578 12:54:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:56.578 12:54:39 -- common/autotest_common.sh@852 -- # return 0 00:34:56.578 12:54:39 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:34:56.578 Checking default timeout settings: 00:34:56.578 12:54:39 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:34:56.837 Making settings changes with rpc: 00:34:56.837 12:54:39 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:34:56.837 12:54:39 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:34:57.096 Check default vs. 
modified settings: 00:34:57.096 12:54:39 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:34:57.096 12:54:39 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_141536 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_141536 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:34:57.355 Setting action_on_timeout is changed as expected. 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_141536 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_141536 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:34:57.355 Setting timeout_us is changed as expected. 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_141536 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:57.355 12:54:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_141536 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:34:57.614 Setting timeout_admin_us is changed as expected. 
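For reference, the check above is a plain save/diff pattern: snapshot the bdev_nvme options with save_config, apply the new timeouts, snapshot again, and compare the three fields. A minimal standalone sketch, assuming the same SPDK checkout and a spdk_tgt already listening on /var/tmp/spdk.sock (the /tmp file names here are placeholders, not the per-pid paths used by the run above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default                 # snapshot the default options
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified                # snapshot after the change
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting"  /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done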
00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_141536 /tmp/settings_modified_141536 00:34:57.614 12:54:39 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 141560 00:34:57.614 12:54:39 -- common/autotest_common.sh@926 -- # '[' -z 141560 ']' 00:34:57.614 12:54:39 -- common/autotest_common.sh@930 -- # kill -0 141560 00:34:57.614 12:54:39 -- common/autotest_common.sh@931 -- # uname 00:34:57.614 12:54:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:57.614 12:54:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141560 00:34:57.614 12:54:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:57.614 killing process with pid 141560 00:34:57.614 12:54:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:57.614 12:54:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141560' 00:34:57.614 12:54:39 -- common/autotest_common.sh@945 -- # kill 141560 00:34:57.614 12:54:39 -- common/autotest_common.sh@950 -- # wait 141560 00:35:00.149 RPC TIMEOUT SETTING TEST PASSED. 00:35:00.149 12:54:42 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:35:00.149 ************************************ 00:35:00.149 END TEST nvme_rpc_timeouts 00:35:00.149 ************************************ 00:35:00.149 00:35:00.149 real 0m5.279s 00:35:00.149 user 0m9.764s 00:35:00.149 sys 0m0.913s 00:35:00.149 12:54:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:00.149 12:54:42 -- common/autotest_common.sh@10 -- # set +x 00:35:00.149 12:54:42 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:35:00.149 12:54:42 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:35:00.149 12:54:42 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:35:00.149 12:54:42 -- spdk/autotest.sh@268 -- # timing_exit lib 00:35:00.149 12:54:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:00.149 12:54:42 -- common/autotest_common.sh@10 -- # set +x 00:35:00.408 12:54:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:00.408 12:54:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:00.408 12:54:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:00.408 12:54:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:00.408 12:54:42 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:35:00.408 12:54:42 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:00.408 12:54:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:00.408 12:54:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:35:00.408 12:54:42 -- common/autotest_common.sh@10 -- # set +x 00:35:00.408 ************************************ 00:35:00.408 START TEST blockdev_raid5f 00:35:00.408 ************************************ 00:35:00.408 12:54:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:00.408 * Looking for test storage... 00:35:00.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:35:00.408 12:54:42 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:35:00.408 12:54:42 -- bdev/nbd_common.sh@6 -- # set -e 00:35:00.408 12:54:42 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:00.408 12:54:42 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:00.408 12:54:42 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:35:00.408 12:54:42 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:35:00.408 12:54:42 -- bdev/blockdev.sh@18 -- # : 00:35:00.408 12:54:42 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:35:00.408 12:54:42 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:35:00.408 12:54:42 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:35:00.408 12:54:42 -- bdev/blockdev.sh@672 -- # uname -s 00:35:00.408 12:54:42 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:35:00.408 12:54:42 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:35:00.408 12:54:42 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:35:00.408 12:54:42 -- bdev/blockdev.sh@681 -- # crypto_device= 00:35:00.408 12:54:42 -- bdev/blockdev.sh@682 -- # dek= 00:35:00.408 12:54:42 -- bdev/blockdev.sh@683 -- # env_ctx= 00:35:00.408 12:54:42 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:35:00.408 12:54:42 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:35:00.408 12:54:42 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:35:00.408 12:54:42 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:35:00.408 12:54:42 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:35:00.408 12:54:42 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=141720 00:35:00.408 12:54:42 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:35:00.408 12:54:42 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:00.408 12:54:42 -- bdev/blockdev.sh@47 -- # waitforlisten 141720 00:35:00.408 12:54:42 -- common/autotest_common.sh@819 -- # '[' -z 141720 ']' 00:35:00.408 12:54:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.408 12:54:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:00.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.408 12:54:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.408 12:54:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:00.408 12:54:42 -- common/autotest_common.sh@10 -- # set +x 00:35:00.667 [2024-10-01 12:54:42.973183] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:35:00.667 [2024-10-01 12:54:42.973336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141720 ] 00:35:00.667 [2024-10-01 12:54:43.139642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.926 [2024-10-01 12:54:43.398778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:00.926 [2024-10-01 12:54:43.399028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.306 12:54:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:02.306 12:54:44 -- common/autotest_common.sh@852 -- # return 0 00:35:02.306 12:54:44 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:35:02.306 12:54:44 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:35:02.306 12:54:44 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:35:02.306 12:54:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.306 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:35:02.306 Malloc0 00:35:02.306 Malloc1 00:35:02.306 Malloc2 00:35:02.306 12:54:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.306 12:54:44 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:35:02.306 12:54:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.306 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:35:02.306 12:54:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.306 12:54:44 -- bdev/blockdev.sh@738 -- # cat 00:35:02.306 12:54:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:35:02.307 12:54:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.307 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:35:02.307 12:54:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.307 12:54:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:35:02.307 12:54:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.307 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:35:02.307 12:54:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.307 12:54:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:02.307 12:54:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.307 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:35:02.307 12:54:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.307 12:54:44 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:35:02.307 12:54:44 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:35:02.307 12:54:44 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:35:02.307 12:54:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:02.307 12:54:44 -- common/autotest_common.sh@10 -- # set +x 00:35:02.307 12:54:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:02.307 12:54:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:35:02.307 12:54:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bc04ac2e-0519-4cbd-81fd-ddd1be6489d3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bc04ac2e-0519-4cbd-81fd-ddd1be6489d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bc04ac2e-0519-4cbd-81fd-ddd1be6489d3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "377436f3-80c7-4497-bdc9-e6c713c8a197",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ba0df14c-378c-4e97-b08f-27e1c53d15d7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "9e1a71c6-4915-4a28-9af3-7c92c0014cef",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:02.307 12:54:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:35:02.307 12:54:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:35:02.307 12:54:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:35:02.307 12:54:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:35:02.307 12:54:44 -- bdev/blockdev.sh@752 -- # killprocess 141720 00:35:02.307 12:54:44 -- common/autotest_common.sh@926 -- # '[' -z 141720 ']' 00:35:02.307 12:54:44 -- common/autotest_common.sh@930 -- # kill -0 141720 00:35:02.307 12:54:44 -- common/autotest_common.sh@931 -- # uname 00:35:02.307 12:54:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:02.307 12:54:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141720 00:35:02.307 12:54:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:02.307 killing process with pid 141720 00:35:02.307 12:54:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:02.307 12:54:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141720' 00:35:02.307 12:54:44 -- common/autotest_common.sh@945 -- # kill 141720 00:35:02.307 12:54:44 -- common/autotest_common.sh@950 -- # wait 141720 00:35:05.595 12:54:47 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:05.595 12:54:47 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:05.595 12:54:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:35:05.595 12:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:05.595 12:54:47 -- common/autotest_common.sh@10 -- # set +x 00:35:05.595 ************************************ 00:35:05.595 START TEST bdev_hello_world 00:35:05.595 ************************************ 00:35:05.595 12:54:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:05.595 [2024-10-01 12:54:47.856143] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:35:05.595 [2024-10-01 12:54:47.856307] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141808 ] 00:35:05.595 [2024-10-01 12:54:48.023413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.854 [2024-10-01 12:54:48.276682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.421 [2024-10-01 12:54:48.906681] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:06.421 [2024-10-01 12:54:48.906775] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:35:06.421 [2024-10-01 12:54:48.906817] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:06.421 [2024-10-01 12:54:48.907394] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:06.421 [2024-10-01 12:54:48.907562] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:06.421 [2024-10-01 12:54:48.907597] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:06.421 [2024-10-01 12:54:48.907671] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:35:06.421 00:35:06.421 [2024-10-01 12:54:48.907707] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:08.327 00:35:08.327 real 0m2.748s 00:35:08.327 user 0m2.268s 00:35:08.327 sys 0m0.364s 00:35:08.327 12:54:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:08.327 ************************************ 00:35:08.327 END TEST bdev_hello_world 00:35:08.327 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:35:08.327 ************************************ 00:35:08.327 12:54:50 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:35:08.327 12:54:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:08.327 12:54:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:08.327 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:35:08.327 ************************************ 00:35:08.327 START TEST bdev_bounds 00:35:08.327 ************************************ 00:35:08.327 12:54:50 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:35:08.327 12:54:50 -- bdev/blockdev.sh@288 -- # bdevio_pid=141865 00:35:08.327 12:54:50 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:08.327 12:54:50 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:08.327 Process bdevio pid: 141865 00:35:08.327 12:54:50 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 141865' 00:35:08.327 12:54:50 -- bdev/blockdev.sh@291 -- # waitforlisten 141865 00:35:08.327 12:54:50 -- common/autotest_common.sh@819 -- # '[' -z 141865 ']' 00:35:08.327 12:54:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.327 12:54:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:08.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.327 12:54:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:08.327 12:54:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:08.327 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:35:08.327 [2024-10-01 12:54:50.692535] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:35:08.327 [2024-10-01 12:54:50.692748] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141865 ] 00:35:08.586 [2024-10-01 12:54:50.872002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:08.899 [2024-10-01 12:54:51.148108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.899 [2024-10-01 12:54:51.148298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.899 [2024-10-01 12:54:51.148301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.836 12:54:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:09.836 12:54:52 -- common/autotest_common.sh@852 -- # return 0 00:35:09.836 12:54:52 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:09.836 I/O targets: 00:35:09.836 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:35:09.836 00:35:09.836 00:35:09.836 CUnit - A unit testing framework for C - Version 2.1-3 00:35:09.836 http://cunit.sourceforge.net/ 00:35:09.836 00:35:09.836 00:35:09.836 Suite: bdevio tests on: raid5f 00:35:09.836 Test: blockdev write read block ...passed 00:35:09.836 Test: blockdev write zeroes read block ...passed 00:35:09.836 Test: blockdev write zeroes read no split ...passed 00:35:10.096 Test: blockdev write zeroes read split ...passed 00:35:10.096 Test: blockdev write zeroes read split partial ...passed 00:35:10.096 Test: blockdev reset ...passed 00:35:10.096 Test: blockdev write read 8 blocks ...passed 00:35:10.096 Test: blockdev write read size > 128k ...passed 00:35:10.096 Test: blockdev write read invalid size ...passed 00:35:10.096 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:10.096 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:10.096 Test: blockdev write read max offset ...passed 00:35:10.096 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:10.096 Test: blockdev writev readv 8 blocks ...passed 00:35:10.096 Test: blockdev writev readv 30 x 1block ...passed 00:35:10.096 Test: blockdev writev readv block ...passed 00:35:10.096 Test: blockdev writev readv size > 128k ...passed 00:35:10.096 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:10.096 Test: blockdev comparev and writev ...passed 00:35:10.096 Test: blockdev nvme passthru rw ...passed 00:35:10.096 Test: blockdev nvme passthru vendor specific ...passed 00:35:10.096 Test: blockdev nvme admin passthru ...passed 00:35:10.096 Test: blockdev copy ...passed 00:35:10.096 00:35:10.096 Run Summary: Type Total Ran Passed Failed Inactive 00:35:10.096 suites 1 1 n/a 0 0 00:35:10.096 tests 23 23 23 0 0 00:35:10.096 asserts 130 130 130 0 n/a 00:35:10.096 00:35:10.096 Elapsed time = 0.621 seconds 00:35:10.096 0 00:35:10.096 12:54:52 -- bdev/blockdev.sh@293 -- # killprocess 141865 00:35:10.096 12:54:52 -- common/autotest_common.sh@926 -- # '[' -z 141865 ']' 00:35:10.096 12:54:52 -- common/autotest_common.sh@930 -- # kill -0 141865 00:35:10.096 12:54:52 -- common/autotest_common.sh@931 -- # uname 00:35:10.096 12:54:52 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:10.096 12:54:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141865 00:35:10.096 12:54:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:10.096 12:54:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:10.096 12:54:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141865' 00:35:10.096 killing process with pid 141865 00:35:10.096 12:54:52 -- common/autotest_common.sh@945 -- # kill 141865 00:35:10.096 12:54:52 -- common/autotest_common.sh@950 -- # wait 141865 00:35:12.002 12:54:54 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:35:12.002 00:35:12.002 real 0m3.830s 00:35:12.002 user 0m9.266s 00:35:12.002 sys 0m0.559s 00:35:12.002 12:54:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:12.002 12:54:54 -- common/autotest_common.sh@10 -- # set +x 00:35:12.002 ************************************ 00:35:12.002 END TEST bdev_bounds 00:35:12.002 ************************************ 00:35:12.003 12:54:54 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:12.003 12:54:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:35:12.003 12:54:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:12.003 12:54:54 -- common/autotest_common.sh@10 -- # set +x 00:35:12.003 ************************************ 00:35:12.003 START TEST bdev_nbd 00:35:12.003 ************************************ 00:35:12.003 12:54:54 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:12.003 12:54:54 -- bdev/blockdev.sh@298 -- # uname -s 00:35:12.003 12:54:54 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:35:12.003 12:54:54 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:12.003 12:54:54 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:12.003 12:54:54 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:35:12.003 12:54:54 -- bdev/blockdev.sh@302 -- # local bdev_all 00:35:12.003 12:54:54 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:35:12.003 12:54:54 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:35:12.003 12:54:54 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:12.003 12:54:54 -- bdev/blockdev.sh@309 -- # local nbd_all 00:35:12.003 12:54:54 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:35:12.003 12:54:54 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:35:12.003 12:54:54 -- bdev/blockdev.sh@312 -- # local nbd_list 00:35:12.003 12:54:54 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:35:12.003 12:54:54 -- bdev/blockdev.sh@313 -- # local bdev_list 00:35:12.003 12:54:54 -- bdev/blockdev.sh@316 -- # nbd_pid=141948 00:35:12.003 12:54:54 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:12.003 12:54:54 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:12.003 12:54:54 -- bdev/blockdev.sh@318 -- # waitforlisten 141948 /var/tmp/spdk-nbd.sock 00:35:12.003 12:54:54 -- common/autotest_common.sh@819 -- # '[' -z 141948 ']' 00:35:12.003 12:54:54 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:12.003 12:54:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:12.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:12.003 12:54:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:12.003 12:54:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:12.003 12:54:54 -- common/autotest_common.sh@10 -- # set +x 00:35:12.262 [2024-10-01 12:54:54.609510] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:35:12.262 [2024-10-01 12:54:54.609682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.262 [2024-10-01 12:54:54.780702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.522 [2024-10-01 12:54:55.006208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.900 12:54:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:13.900 12:54:56 -- common/autotest_common.sh@852 -- # return 0 00:35:13.900 12:54:56 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@24 -- # local i 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:13.900 12:54:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:35:13.900 12:54:56 -- common/autotest_common.sh@857 -- # local i 00:35:13.900 12:54:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:35:13.900 12:54:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:35:13.900 12:54:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:35:13.900 12:54:56 -- common/autotest_common.sh@861 -- # break 00:35:13.900 12:54:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:35:13.900 12:54:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:35:13.900 12:54:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:13.900 1+0 records in 00:35:13.900 1+0 records out 00:35:13.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369973 s, 11.1 MB/s 00:35:13.900 12:54:56 -- common/autotest_common.sh@874 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:13.900 12:54:56 -- common/autotest_common.sh@874 -- # size=4096 00:35:13.900 12:54:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:13.900 12:54:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:35:13.900 12:54:56 -- common/autotest_common.sh@877 -- # return 0 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:13.900 12:54:56 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:14.158 { 00:35:14.158 "nbd_device": "/dev/nbd0", 00:35:14.158 "bdev_name": "raid5f" 00:35:14.158 } 00:35:14.158 ]' 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:14.158 { 00:35:14.158 "nbd_device": "/dev/nbd0", 00:35:14.158 "bdev_name": "raid5f" 00:35:14.158 } 00:35:14.158 ]' 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@51 -- # local i 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:14.158 12:54:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@41 -- # break 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@45 -- # return 0 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:14.417 12:54:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@65 -- # true 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@65 -- # count=0 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@122 -- # count=0 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@127 -- # return 0 00:35:14.676 12:54:57 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@12 -- # local i 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:14.676 12:54:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:35:14.934 /dev/nbd0 00:35:14.934 12:54:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:14.934 12:54:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:14.934 12:54:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:35:14.934 12:54:57 -- common/autotest_common.sh@857 -- # local i 00:35:14.934 12:54:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:35:14.934 12:54:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:35:14.934 12:54:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:35:14.934 12:54:57 -- common/autotest_common.sh@861 -- # break 00:35:14.935 12:54:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:35:14.935 12:54:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:35:14.935 12:54:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:14.935 1+0 records in 00:35:14.935 1+0 records out 00:35:14.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271609 s, 15.1 MB/s 00:35:14.935 12:54:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:14.935 12:54:57 -- common/autotest_common.sh@874 -- # size=4096 00:35:14.935 12:54:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:14.935 12:54:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:35:14.935 12:54:57 -- common/autotest_common.sh@877 -- # return 0 00:35:14.935 12:54:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:14.935 12:54:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:14.935 12:54:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:14.935 12:54:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:14.935 12:54:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:15.194 { 00:35:15.194 "nbd_device": "/dev/nbd0", 00:35:15.194 "bdev_name": "raid5f" 00:35:15.194 } 00:35:15.194 ]' 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:15.194 { 00:35:15.194 "nbd_device": "/dev/nbd0", 00:35:15.194 "bdev_name": "raid5f" 00:35:15.194 
} 00:35:15.194 ]' 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@65 -- # count=1 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@66 -- # echo 1 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@95 -- # count=1 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:15.194 256+0 records in 00:35:15.194 256+0 records out 00:35:15.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136857 s, 76.6 MB/s 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:15.194 256+0 records in 00:35:15.194 256+0 records out 00:35:15.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289154 s, 36.3 MB/s 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@51 -- # local i 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:15.194 12:54:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:35:15.454 12:54:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@41 -- # break 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@45 -- # return 0 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:15.454 12:54:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@65 -- # true 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@65 -- # count=0 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@104 -- # count=0 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@109 -- # return 0 00:35:15.714 12:54:58 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:35:15.714 12:54:58 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:15.973 malloc_lvol_verify 00:35:15.973 12:54:58 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:16.231 45e5e9f7-8408-4187-b9c2-d72d07278675 00:35:16.231 12:54:58 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:16.231 3d7a148b-bb2f-45a1-a3f0-ae1892593f19 00:35:16.231 12:54:58 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:16.489 /dev/nbd0 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:35:16.489 mke2fs 1.46.5 (30-Dec-2021) 00:35:16.489 00:35:16.489 Filesystem too small for a journal 00:35:16.489 Discarding device blocks: 0/1024 done 00:35:16.489 Creating filesystem with 1024 4k blocks and 1024 inodes 00:35:16.489 00:35:16.489 Allocating group tables: 0/1 done 00:35:16.489 Writing inode tables: 0/1 done 00:35:16.489 Writing superblocks and filesystem accounting information: 0/1 done 00:35:16.489 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@51 -- # local i 00:35:16.489 12:54:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
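The nbd portion above exports the bdev as /dev/nbd0, writes 1 MiB of random data through the device (256 blocks of 4096 bytes), and byte-compares it against the source file; the lvol portion then layers a logical volume on a malloc bdev and formats it. A rough reproduction sketch of that second part, assuming spdk_tgt is up with its RPC socket at /var/tmp/spdk-nbd.sock and the nbd kernel module loaded:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512     # 16 MiB malloc bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs     # lvstore on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                      # 4 MiB lvol, addressed as lvs/lvol
$rpc nbd_start_disk lvs/lvol /dev/nbd0                   # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                      # fs is too small for a journal, as logged above
$rpc nbd_stop_disk /dev/nbd0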
00:35:16.490 12:54:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@41 -- # break 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@45 -- # return 0 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:35:16.756 12:54:59 -- bdev/nbd_common.sh@147 -- # return 0 00:35:16.756 12:54:59 -- bdev/blockdev.sh@324 -- # killprocess 141948 00:35:16.756 12:54:59 -- common/autotest_common.sh@926 -- # '[' -z 141948 ']' 00:35:16.756 12:54:59 -- common/autotest_common.sh@930 -- # kill -0 141948 00:35:16.756 12:54:59 -- common/autotest_common.sh@931 -- # uname 00:35:16.756 12:54:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:16.756 12:54:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141948 00:35:16.756 12:54:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:16.756 12:54:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:16.756 12:54:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141948' 00:35:16.756 killing process with pid 141948 00:35:16.756 12:54:59 -- common/autotest_common.sh@945 -- # kill 141948 00:35:16.756 12:54:59 -- common/autotest_common.sh@950 -- # wait 141948 00:35:18.661 12:55:00 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:35:18.661 00:35:18.661 real 0m6.344s 00:35:18.661 user 0m8.133s 00:35:18.661 sys 0m1.530s 00:35:18.661 12:55:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:18.661 ************************************ 00:35:18.661 END TEST bdev_nbd 00:35:18.661 ************************************ 00:35:18.661 12:55:00 -- common/autotest_common.sh@10 -- # set +x 00:35:18.661 12:55:00 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:35:18.661 12:55:00 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:35:18.661 12:55:00 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:35:18.661 12:55:00 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:18.661 12:55:00 -- common/autotest_common.sh@10 -- # set +x 00:35:18.661 ************************************ 00:35:18.661 START TEST bdev_fio 00:35:18.661 ************************************ 00:35:18.661 12:55:00 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:35:18.661 12:55:00 -- bdev/blockdev.sh@329 -- # local env_context 00:35:18.661 12:55:00 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:35:18.661 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:35:18.661 12:55:00 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:35:18.661 12:55:00 -- bdev/blockdev.sh@337 -- # echo '' 00:35:18.661 12:55:00 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:35:18.661 12:55:00 -- bdev/blockdev.sh@337 -- # env_context= 00:35:18.661 12:55:00 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:18.661 12:55:00 -- common/autotest_common.sh@1260 -- # local workload=verify 00:35:18.661 12:55:00 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:35:18.661 12:55:00 -- common/autotest_common.sh@1262 -- # local env_context= 00:35:18.661 12:55:00 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:35:18.661 12:55:00 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:18.661 12:55:00 -- common/autotest_common.sh@1280 -- # cat 00:35:18.661 12:55:00 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1293 -- # cat 00:35:18.661 12:55:00 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:35:18.661 12:55:00 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:35:18.661 12:55:01 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:35:18.661 12:55:01 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:35:18.661 12:55:01 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:35:18.661 12:55:01 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:35:18.661 12:55:01 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:35:18.661 12:55:01 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:35:18.661 12:55:01 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:18.661 12:55:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:35:18.661 12:55:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:18.661 12:55:01 -- common/autotest_common.sh@10 -- # set +x 00:35:18.661 ************************************ 00:35:18.661 START TEST bdev_fio_rw_verify 00:35:18.661 ************************************ 00:35:18.661 12:55:01 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:18.661 12:55:01 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:18.661 12:55:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:18.661 12:55:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:35:18.661 12:55:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:18.661 12:55:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:18.661 12:55:01 -- common/autotest_common.sh@1320 -- # shift 00:35:18.661 12:55:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:18.661 12:55:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.661 12:55:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:18.661 12:55:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:18.661 12:55:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:18.661 12:55:01 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:35:18.661 12:55:01 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:35:18.661 12:55:01 -- common/autotest_common.sh@1326 -- # break 00:35:18.661 12:55:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:18.661 12:55:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:18.919 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:18.919 fio-3.35 00:35:18.919 Starting 1 thread 00:35:31.180 00:35:31.180 job_raid5f: (groupid=0, jobs=1): err= 0: pid=142196: Tue Oct 1 12:55:12 2024 00:35:31.180 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(432MiB/10001msec) 00:35:31.180 slat (usec): min=18, max=362, avg=21.92, stdev= 3.15 00:35:31.180 clat (usec): min=11, max=577, avg=143.91, stdev=53.83 00:35:31.180 lat (usec): min=30, max=599, avg=165.83, stdev=54.45 00:35:31.180 clat percentiles (usec): 00:35:31.180 | 50.000th=[ 141], 99.000th=[ 258], 99.900th=[ 297], 99.990th=[ 367], 00:35:31.180 | 99.999th=[ 553] 00:35:31.180 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(446MiB/9878msec); 0 zone resets 00:35:31.180 slat (usec): min=7, max=278, avg=18.48, stdev= 4.70 00:35:31.180 clat (usec): min=58, max=1542, avg=328.33, stdev=56.53 00:35:31.180 lat (usec): min=74, max=1562, avg=346.81, stdev=58.64 00:35:31.180 clat percentiles (usec): 00:35:31.180 | 50.000th=[ 326], 99.000th=[ 461], 99.900th=[ 742], 99.990th=[ 1156], 00:35:31.180 | 99.999th=[ 1532] 00:35:31.180 bw ( KiB/s): min=39776, max=50432, per=98.41%, avg=45523.37, stdev=3173.49, samples=19 00:35:31.180 iops : min= 9944, max=12608, avg=11380.84, stdev=793.37, samples=19 00:35:31.180 lat (usec) : 20=0.01%, 50=0.01%, 100=13.34%, 250=37.95%, 500=48.44% 00:35:31.180 lat (usec) : 750=0.21%, 1000=0.03% 00:35:31.180 lat (msec) : 2=0.02% 00:35:31.180 cpu : usr=99.70%, sys=0.23%, ctx=122, majf=0, minf=7844 00:35:31.180 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.180 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.180 issued rwts: total=110604,114236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.180 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:31.180 00:35:31.180 Run status group 0 (all jobs): 00:35:31.180 READ: bw=43.2MiB/s (45.3MB/s), 
43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=432MiB (453MB), run=10001-10001msec 00:35:31.180 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=446MiB (468MB), run=9878-9878msec 00:35:31.745 ----------------------------------------------------- 00:35:31.745 Suppressions used: 00:35:31.745 count bytes template 00:35:31.745 1 7 /usr/src/fio/parse.c 00:35:31.745 199 19104 /usr/src/fio/iolog.c 00:35:31.745 1 904 libcrypto.so 00:35:31.745 ----------------------------------------------------- 00:35:31.745 00:35:31.745 00:35:31.745 real 0m13.170s 00:35:31.745 user 0m13.816s 00:35:31.745 sys 0m0.707s 00:35:31.745 ************************************ 00:35:31.745 END TEST bdev_fio_rw_verify 00:35:31.745 ************************************ 00:35:31.746 12:55:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:31.746 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:35:32.004 12:55:14 -- bdev/blockdev.sh@348 -- # rm -f 00:35:32.004 12:55:14 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:32.004 12:55:14 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:32.004 12:55:14 -- common/autotest_common.sh@1260 -- # local workload=trim 00:35:32.004 12:55:14 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:35:32.004 12:55:14 -- common/autotest_common.sh@1262 -- # local env_context= 00:35:32.004 12:55:14 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:35:32.004 12:55:14 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:32.004 12:55:14 -- common/autotest_common.sh@1280 -- # cat 00:35:32.004 12:55:14 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:35:32.004 12:55:14 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:35:32.004 12:55:14 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bc04ac2e-0519-4cbd-81fd-ddd1be6489d3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bc04ac2e-0519-4cbd-81fd-ddd1be6489d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bc04ac2e-0519-4cbd-81fd-ddd1be6489d3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": 
"377436f3-80c7-4497-bdc9-e6c713c8a197",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ba0df14c-378c-4e97-b08f-27e1c53d15d7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "9e1a71c6-4915-4a28-9af3-7c92c0014cef",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:32.004 12:55:14 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:35:32.004 12:55:14 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:32.004 12:55:14 -- bdev/blockdev.sh@360 -- # popd 00:35:32.004 /home/vagrant/spdk_repo/spdk 00:35:32.004 12:55:14 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:35:32.004 12:55:14 -- bdev/blockdev.sh@362 -- # return 0 00:35:32.004 00:35:32.004 real 0m13.420s 00:35:32.004 user 0m13.950s 00:35:32.004 sys 0m0.818s 00:35:32.004 12:55:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:32.004 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:35:32.004 ************************************ 00:35:32.004 END TEST bdev_fio 00:35:32.004 ************************************ 00:35:32.004 12:55:14 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:32.004 12:55:14 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:35:32.004 12:55:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:32.004 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:35:32.004 ************************************ 00:35:32.004 START TEST bdev_verify 00:35:32.004 ************************************ 00:35:32.004 12:55:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:32.004 [2024-10-01 12:55:14.538668] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:35:32.004 [2024-10-01 12:55:14.539067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142371 ] 00:35:32.262 [2024-10-01 12:55:14.707150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:32.520 [2024-10-01 12:55:15.008993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.520 [2024-10-01 12:55:15.008999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.451 Running I/O for 5 seconds... 
00:35:38.719 00:35:38.719 Latency(us) 00:35:38.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.719 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:38.719 Verification LBA range: start 0x0 length 0x2000 00:35:38.719 raid5f : 5.01 7916.45 30.92 0.00 0.00 25621.45 424.40 53692.14 00:35:38.719 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:38.719 Verification LBA range: start 0x2000 length 0x2000 00:35:38.719 raid5f : 5.02 5238.11 20.46 0.00 0.00 38723.94 284.58 30109.71 00:35:38.719 =================================================================================================================== 00:35:38.719 Total : 13154.56 51.39 0.00 0.00 30840.44 284.58 53692.14 00:35:40.104 ************************************ 00:35:40.104 END TEST bdev_verify 00:35:40.104 ************************************ 00:35:40.104 00:35:40.104 real 0m8.060s 00:35:40.104 user 0m14.506s 00:35:40.104 sys 0m0.416s 00:35:40.104 12:55:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:40.104 12:55:22 -- common/autotest_common.sh@10 -- # set +x 00:35:40.104 12:55:22 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:40.104 12:55:22 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:35:40.104 12:55:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:40.104 12:55:22 -- common/autotest_common.sh@10 -- # set +x 00:35:40.104 ************************************ 00:35:40.104 START TEST bdev_verify_big_io 00:35:40.104 ************************************ 00:35:40.104 12:55:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:40.363 [2024-10-01 12:55:22.665984] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:35:40.363 [2024-10-01 12:55:22.666424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142486 ] 00:35:40.363 [2024-10-01 12:55:22.846391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:40.621 [2024-10-01 12:55:23.122950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.621 [2024-10-01 12:55:23.122953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.556 Running I/O for 5 seconds... 
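The START TEST / END TEST banners and the real/user/sys timings that bracket each test come from the harness's run_test wrapper rather than from bdevperf itself. In spirit it is no more than the following simplified sketch (not the actual autotest_common.sh implementation, which also toggles xtrace and does extra exit-code bookkeeping):

    # hypothetical condensation of run_test: banner, timed run, banner
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # the real/user/sys lines in the log come from this
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
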
00:35:46.829 00:35:46.830 Latency(us) 00:35:46.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.830 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:46.830 Verification LBA range: start 0x0 length 0x200 00:35:46.830 raid5f : 5.18 605.41 37.84 0.00 0.00 5517950.04 160.39 172657.09 00:35:46.830 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:46.830 Verification LBA range: start 0x200 length 0x200 00:35:46.830 raid5f : 5.21 448.84 28.05 0.00 0.00 7420162.61 223.72 225717.56 00:35:46.830 =================================================================================================================== 00:35:46.830 Total : 1054.25 65.89 0.00 0.00 6330501.82 160.39 225717.56 00:35:48.741 00:35:48.741 real 0m8.332s 00:35:48.741 user 0m15.067s 00:35:48.741 sys 0m0.384s 00:35:48.741 12:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:48.741 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:35:48.741 ************************************ 00:35:48.741 END TEST bdev_verify_big_io 00:35:48.741 ************************************ 00:35:48.741 12:55:30 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:48.741 12:55:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:35:48.741 12:55:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:48.741 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:35:48.741 ************************************ 00:35:48.741 START TEST bdev_write_zeroes 00:35:48.741 ************************************ 00:35:48.741 12:55:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:48.741 [2024-10-01 12:55:31.080935] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:35:48.741 [2024-10-01 12:55:31.081294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142606 ] 00:35:48.741 [2024-10-01 12:55:31.250944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.000 [2024-10-01 12:55:31.529712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.937 Running I/O for 1 seconds... 
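A quick sanity check that applies to either latency table above: the MiB/s column is exactly IOPS times the IO size. For the 4 KiB verify job on core mask 0x1, 7916.45 IOPS x 4096 B / 1048576 B/MiB ≈ 30.92 MiB/s, and for the 64 KiB big-IO job, 605.41 x 65536 / 1048576 ≈ 37.84 MiB/s, both matching the printed values. As a one-liner:

    # recompute a table row's MiB/s from its IOPS and IO size
    awk 'BEGIN { printf "%.2f MiB/s\n", 7916.45 * 4096 / 1048576 }'   # -> 30.92 MiB/s
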
00:35:50.874 00:35:50.874 Latency(us) 00:35:50.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.874 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:35:50.874 raid5f : 1.01 23718.31 92.65 0.00 0.00 5378.41 1546.28 7106.31 00:35:50.874 =================================================================================================================== 00:35:50.874 Total : 23718.31 92.65 0.00 0.00 5378.41 1546.28 7106.31 00:35:52.778 ************************************ 00:35:52.778 END TEST bdev_write_zeroes 00:35:52.778 ************************************ 00:35:52.778 00:35:52.778 real 0m4.126s 00:35:52.778 user 0m3.685s 00:35:52.778 sys 0m0.321s 00:35:52.778 12:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:52.778 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:35:52.778 12:55:35 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:52.778 12:55:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:35:52.778 12:55:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:52.778 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:35:52.778 ************************************ 00:35:52.778 START TEST bdev_json_nonenclosed 00:35:52.778 ************************************ 00:35:52.778 12:55:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:52.778 [2024-10-01 12:55:35.292536] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:35:52.778 [2024-10-01 12:55:35.292697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142675 ] 00:35:53.037 [2024-10-01 12:55:35.458959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.296 [2024-10-01 12:55:35.728905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.296 [2024-10-01 12:55:35.729138] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:35:53.296 [2024-10-01 12:55:35.729180] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:53.864 00:35:53.864 real 0m1.031s 00:35:53.864 user 0m0.787s 00:35:53.864 sys 0m0.145s 00:35:53.864 12:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.864 12:55:36 -- common/autotest_common.sh@10 -- # set +x 00:35:53.864 ************************************ 00:35:53.864 END TEST bdev_json_nonenclosed 00:35:53.864 ************************************ 00:35:53.864 12:55:36 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:53.864 12:55:36 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:35:53.864 12:55:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:53.864 12:55:36 -- common/autotest_common.sh@10 -- # set +x 00:35:53.864 ************************************ 00:35:53.864 START TEST bdev_json_nonarray 00:35:53.864 ************************************ 00:35:53.864 12:55:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:54.124 [2024-10-01 12:55:36.408918] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:35:54.124 [2024-10-01 12:55:36.409812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142714 ] 00:35:54.124 [2024-10-01 12:55:36.578160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.382 [2024-10-01 12:55:36.828476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.382 [2024-10-01 12:55:36.828786] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:35:54.382 [2024-10-01 12:55:36.828833] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:54.949 00:35:54.949 real 0m1.011s 00:35:54.949 user 0m0.776s 00:35:54.949 sys 0m0.134s 00:35:54.949 12:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:54.949 ************************************ 00:35:54.949 END TEST bdev_json_nonarray 00:35:54.949 ************************************ 00:35:54.949 12:55:37 -- common/autotest_common.sh@10 -- # set +x 00:35:54.949 12:55:37 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:35:54.949 12:55:37 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:35:54.949 12:55:37 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:35:54.949 12:55:37 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:35:54.949 12:55:37 -- bdev/blockdev.sh@809 -- # cleanup 00:35:54.949 12:55:37 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:35:54.949 12:55:37 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:54.949 12:55:37 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:35:54.949 12:55:37 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:35:54.949 12:55:37 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:35:54.949 12:55:37 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:35:54.949 ************************************ 00:35:54.949 END TEST blockdev_raid5f 00:35:54.949 ************************************ 00:35:54.949 00:35:54.949 real 0m54.684s 00:35:54.949 user 1m13.650s 00:35:54.949 sys 0m5.866s 00:35:54.949 12:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:54.949 12:55:37 -- common/autotest_common.sh@10 -- # set +x 00:35:55.208 12:55:37 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:35:55.208 12:55:37 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:35:55.208 12:55:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:55.208 12:55:37 -- common/autotest_common.sh@10 -- # set +x 00:35:55.208 12:55:37 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:35:55.208 12:55:37 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:35:55.208 12:55:37 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:35:55.208 12:55:37 -- common/autotest_common.sh@10 -- # set +x 00:35:57.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:35:57.397 Waiting for block devices as requested 00:35:57.397 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:35:57.963 Cleaning 00:35:57.963 Removing: /var/run/dpdk/spdk0/config 00:35:58.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:58.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:58.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:58.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:58.222 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:58.222 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:58.222 Removing: /dev/shm/spdk_tgt_trace.pid103582 00:35:58.222 Removing: /var/run/dpdk/spdk0 00:35:58.222 Removing: /var/run/dpdk/spdk_pid103326 00:35:58.222 Removing: /var/run/dpdk/spdk_pid103582 00:35:58.222 Removing: /var/run/dpdk/spdk_pid103875 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104129 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104315 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104437 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104550 
00:35:58.222 Removing: /var/run/dpdk/spdk_pid104676 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104795 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104846 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104896 00:35:58.222 Removing: /var/run/dpdk/spdk_pid104967 00:35:58.222 Removing: /var/run/dpdk/spdk_pid105106 00:35:58.222 Removing: /var/run/dpdk/spdk_pid105603 00:35:58.222 Removing: /var/run/dpdk/spdk_pid105692 00:35:58.222 Removing: /var/run/dpdk/spdk_pid105782 00:35:58.222 Removing: /var/run/dpdk/spdk_pid105813 00:35:58.222 Removing: /var/run/dpdk/spdk_pid105974 00:35:58.222 Removing: /var/run/dpdk/spdk_pid105996 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106164 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106194 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106263 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106288 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106364 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106396 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106601 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106648 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106696 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106789 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106881 00:35:58.222 Removing: /var/run/dpdk/spdk_pid106925 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107018 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107060 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107117 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107166 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107211 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107258 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107310 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107353 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107410 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107452 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107509 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107547 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107604 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107647 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107705 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107745 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107797 00:35:58.222 Removing: /var/run/dpdk/spdk_pid107844 00:35:58.481 Removing: /var/run/dpdk/spdk_pid107898 00:35:58.481 Removing: /var/run/dpdk/spdk_pid107936 00:35:58.481 Removing: /var/run/dpdk/spdk_pid107988 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108035 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108082 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108129 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108181 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108224 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108282 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108323 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108381 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108417 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108474 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108515 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108566 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108613 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108673 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108718 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108773 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108819 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108872 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108919 00:35:58.481 Removing: /var/run/dpdk/spdk_pid108969 00:35:58.481 Removing: /var/run/dpdk/spdk_pid109067 00:35:58.481 Removing: /var/run/dpdk/spdk_pid109198 00:35:58.481 Removing: /var/run/dpdk/spdk_pid109379 00:35:58.481 
Removing: /var/run/dpdk/spdk_pid109482 00:35:58.481 Removing: /var/run/dpdk/spdk_pid109545 00:35:58.481 Removing: /var/run/dpdk/spdk_pid110749 00:35:58.481 Removing: /var/run/dpdk/spdk_pid110978 00:35:58.481 Removing: /var/run/dpdk/spdk_pid111198 00:35:58.481 Removing: /var/run/dpdk/spdk_pid111330 00:35:58.481 Removing: /var/run/dpdk/spdk_pid111479 00:35:58.481 Removing: /var/run/dpdk/spdk_pid111562 00:35:58.481 Removing: /var/run/dpdk/spdk_pid111600 00:35:58.481 Removing: /var/run/dpdk/spdk_pid111638 00:35:58.481 Removing: /var/run/dpdk/spdk_pid112120 00:35:58.481 Removing: /var/run/dpdk/spdk_pid112207 00:35:58.481 Removing: /var/run/dpdk/spdk_pid112330 00:35:58.481 Removing: /var/run/dpdk/spdk_pid112401 00:35:58.481 Removing: /var/run/dpdk/spdk_pid113563 00:35:58.481 Removing: /var/run/dpdk/spdk_pid114420 00:35:58.481 Removing: /var/run/dpdk/spdk_pid115269 00:35:58.481 Removing: /var/run/dpdk/spdk_pid116333 00:35:58.481 Removing: /var/run/dpdk/spdk_pid117363 00:35:58.481 Removing: /var/run/dpdk/spdk_pid118387 00:35:58.481 Removing: /var/run/dpdk/spdk_pid119797 00:35:58.481 Removing: /var/run/dpdk/spdk_pid120942 00:35:58.481 Removing: /var/run/dpdk/spdk_pid122092 00:35:58.481 Removing: /var/run/dpdk/spdk_pid122729 00:35:58.481 Removing: /var/run/dpdk/spdk_pid123248 00:35:58.481 Removing: /var/run/dpdk/spdk_pid123861 00:35:58.481 Removing: /var/run/dpdk/spdk_pid124339 00:35:58.481 Removing: /var/run/dpdk/spdk_pid124896 00:35:58.481 Removing: /var/run/dpdk/spdk_pid125489 00:35:58.481 Removing: /var/run/dpdk/spdk_pid126115 00:35:58.481 Removing: /var/run/dpdk/spdk_pid126620 00:35:58.481 Removing: /var/run/dpdk/spdk_pid127956 00:35:58.481 Removing: /var/run/dpdk/spdk_pid128531 00:35:58.740 Removing: /var/run/dpdk/spdk_pid129071 00:35:58.740 Removing: /var/run/dpdk/spdk_pid130545 00:35:58.740 Removing: /var/run/dpdk/spdk_pid131189 00:35:58.740 Removing: /var/run/dpdk/spdk_pid131803 00:35:58.740 Removing: /var/run/dpdk/spdk_pid132549 00:35:58.740 Removing: /var/run/dpdk/spdk_pid132616 00:35:58.740 Removing: /var/run/dpdk/spdk_pid132674 00:35:58.740 Removing: /var/run/dpdk/spdk_pid132744 00:35:58.740 Removing: /var/run/dpdk/spdk_pid132892 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133056 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133281 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133590 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133605 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133665 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133699 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133732 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133773 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133805 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133838 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133877 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133911 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133944 00:35:58.740 Removing: /var/run/dpdk/spdk_pid133983 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134015 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134049 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134089 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134120 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134149 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134195 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134223 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134256 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134315 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134351 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134401 00:35:58.740 Removing: /var/run/dpdk/spdk_pid134489 00:35:58.740 Removing: 
/var/run/dpdk/spdk_pid134542 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134574 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134622 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134655 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134689 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134758 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134790 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134834 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134867 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134903 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134932 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134964 00:35:58.741 Removing: /var/run/dpdk/spdk_pid134992 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135024 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135053 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135105 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135165 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135198 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135248 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135287 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135316 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135381 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135419 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135469 00:35:58.741 Removing: /var/run/dpdk/spdk_pid135509 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135538 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135574 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135605 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135636 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135670 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135699 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135806 00:35:58.999 Removing: /var/run/dpdk/spdk_pid135911 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136079 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136124 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136181 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136254 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136292 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136335 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136369 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136425 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136459 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136562 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136633 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136697 00:35:58.999 Removing: /var/run/dpdk/spdk_pid136975 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137119 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137172 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137266 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137373 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137430 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137695 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137845 00:35:58.999 Removing: /var/run/dpdk/spdk_pid137960 00:35:58.999 Removing: /var/run/dpdk/spdk_pid138026 00:35:58.999 Removing: /var/run/dpdk/spdk_pid138065 00:35:58.999 Removing: /var/run/dpdk/spdk_pid138148 00:35:58.999 Removing: /var/run/dpdk/spdk_pid138603 00:35:58.999 Removing: /var/run/dpdk/spdk_pid138653 00:35:58.999 Removing: /var/run/dpdk/spdk_pid138982 00:35:58.999 Removing: /var/run/dpdk/spdk_pid139109 00:35:58.999 Removing: /var/run/dpdk/spdk_pid139224 00:35:58.999 Removing: /var/run/dpdk/spdk_pid139286 00:35:58.999 Removing: /var/run/dpdk/spdk_pid139325 00:35:58.999 Removing: /var/run/dpdk/spdk_pid139363 00:35:58.999 Removing: /var/run/dpdk/spdk_pid140757 00:35:58.999 Removing: /var/run/dpdk/spdk_pid140905 00:35:58.999 Removing: /var/run/dpdk/spdk_pid140919 00:35:58.999 Removing: 
/var/run/dpdk/spdk_pid140937 00:35:58.999 Removing: /var/run/dpdk/spdk_pid141440 00:35:58.999 Removing: /var/run/dpdk/spdk_pid141560 00:35:58.999 Removing: /var/run/dpdk/spdk_pid141720 00:35:58.999 Removing: /var/run/dpdk/spdk_pid141808 00:35:58.999 Removing: /var/run/dpdk/spdk_pid141865 00:35:58.999 Removing: /var/run/dpdk/spdk_pid142176 00:35:58.999 Removing: /var/run/dpdk/spdk_pid142371 00:35:58.999 Removing: /var/run/dpdk/spdk_pid142486 00:35:58.999 Removing: /var/run/dpdk/spdk_pid142606 00:35:58.999 Removing: /var/run/dpdk/spdk_pid142675 00:35:58.999 Removing: /var/run/dpdk/spdk_pid142714 00:35:58.999 Clean 00:35:59.258 killing process with pid 92466 00:35:59.258 killing process with pid 92467 00:35:59.258 12:55:41 -- common/autotest_common.sh@1436 -- # return 0 00:35:59.258 12:55:41 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:35:59.258 12:55:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:59.258 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:35:59.258 12:55:41 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:35:59.258 12:55:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:59.258 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:35:59.518 12:55:41 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:59.518 12:55:41 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:59.518 12:55:41 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:59.518 12:55:41 -- spdk/autotest.sh@394 -- # hash lcov 00:35:59.518 12:55:41 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:59.518 12:55:41 -- spdk/autotest.sh@396 -- # hostname 00:35:59.518 12:55:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:59.518 geninfo: WARNING: invalid characters removed from testname! 
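The coverage post-processing around this point follows the stock lcov pattern: capture what the instrumented test run touched, merge that with the baseline capture, then strip paths that should not count against SPDK. Condensed, with the long --rc options elided and paths matching the log (the remaining filter passes for /usr, examples and app directories appear in full below):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    OUT=$SPDK_DIR/../output
    # capture coverage data accumulated during the test run
    lcov --no-external -q -c -d "$SPDK_DIR" -o "$OUT/cov_test.info"
    # merge the pre-test baseline with the test capture
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # drop vendored DPDK sources from the totals
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
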
00:36:46.231 12:56:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:47.609 12:56:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:50.173 12:56:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:53.462 12:56:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:55.995 12:56:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:00.176 12:56:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:02.753 12:56:44 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:02.753 12:56:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:02.753 12:56:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:02.753 12:56:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.753 12:56:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.754 12:56:44 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:02.754 12:56:44 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:02.754 12:56:44 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:02.754 12:56:44 -- paths/export.sh@5 -- $ export PATH 00:37:02.754 12:56:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:02.754 12:56:44 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:37:02.754 12:56:44 -- common/autobuild_common.sh@440 -- $ date +%s 00:37:02.754 12:56:44 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727787404.XXXXXX 00:37:02.754 12:56:44 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727787404.To1vIK 00:37:02.754 12:56:44 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:37:02.754 12:56:44 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:37:02.754 12:56:44 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:37:02.754 12:56:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:37:02.754 12:56:44 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:37:02.754 12:56:44 -- common/autobuild_common.sh@456 -- $ get_config_params 00:37:02.754 12:56:44 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:02.754 12:56:44 -- common/autotest_common.sh@10 -- $ set +x 00:37:02.754 12:56:44 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:37:02.754 12:56:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:37:02.754 12:56:44 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:37:02.754 12:56:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:02.754 12:56:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:02.754 12:56:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:02.754 12:56:44 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:37:02.754 12:56:44 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:37:02.754 12:56:44 -- common/autotest_common.sh@10 -- $ set +x 00:37:02.754 12:56:44 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:37:02.754 12:56:44 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:37:02.754 12:56:44 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:37:02.754 12:56:44 -- spdk/autopackage.sh@40 -- $ get_config_params 00:37:02.754 12:56:44 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:02.754 12:56:44 -- common/autotest_common.sh@10 -- $ set +x 00:37:02.754 12:56:44 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:37:02.754 12:56:44 -- spdk/autopackage.sh@41 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:37:02.754 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:02.754 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:37:03.013 Using 'verbs' RDMA provider 00:37:18.492 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:37:30.697 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:37:30.697 Creating mk/config.mk...done. 00:37:30.697 Creating mk/cc.flags.mk...done. 00:37:30.697 Type 'make' to build. 00:37:30.697 12:57:12 -- spdk/autopackage.sh@43 -- $ make -j10 00:37:30.956 make[1]: Nothing to be done for 'all'. 00:37:36.251 The Meson build system 00:37:36.251 Version: 1.4.0 00:37:36.251 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:37:36.251 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:37:36.251 Build type: native build 00:37:36.251 Program cat found: YES (/usr/bin/cat) 00:37:36.251 Project name: DPDK 00:37:36.251 Project version: 23.11.0 00:37:36.251 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:37:36.251 C linker for the host machine: cc ld.bfd 2.38 00:37:36.251 Host machine cpu family: x86_64 00:37:36.251 Host machine cpu: x86_64 00:37:36.251 Message: ## Building in Developer Mode ## 00:37:36.251 Program pkg-config found: YES (/usr/bin/pkg-config) 00:37:36.251 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:37:36.251 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:37:36.251 Program python3 found: YES (/usr/bin/python3) 00:37:36.251 Program cat found: YES (/usr/bin/cat) 00:37:36.251 Compiler for C supports arguments -march=native: YES 00:37:36.251 Checking for size of "void *" : 8 00:37:36.251 Checking for size of "void *" : 8 (cached) 00:37:36.251 Library m found: YES 00:37:36.251 Library numa found: YES 00:37:36.251 Has header "numaif.h" : YES 00:37:36.251 Library fdt found: NO 00:37:36.251 Library execinfo found: NO 00:37:36.251 Has header "execinfo.h" : YES 00:37:36.251 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:37:36.251 Run-time dependency libarchive found: NO (tried pkgconfig) 00:37:36.251 Run-time dependency libbsd found: NO (tried pkgconfig) 00:37:36.251 Run-time dependency jansson found: NO (tried pkgconfig) 00:37:36.251 Run-time dependency openssl found: YES 3.0.2 00:37:36.251 Run-time dependency libpcap found: NO (tried pkgconfig) 00:37:36.251 Library pcap found: NO 00:37:36.251 Compiler for C supports arguments -Wcast-qual: YES 00:37:36.251 Compiler for C supports arguments -Wdeprecated: YES 00:37:36.251 Compiler for C supports arguments -Wformat: YES 00:37:36.251 Compiler for C supports arguments -Wformat-nonliteral: YES 00:37:36.251 Compiler for C supports arguments -Wformat-security: YES 00:37:36.251 Compiler for C supports arguments -Wmissing-declarations: YES 00:37:36.251 Compiler for C supports arguments -Wmissing-prototypes: YES 00:37:36.251 Compiler for C supports arguments -Wnested-externs: YES 00:37:36.251 Compiler for C supports arguments -Wold-style-definition: YES 00:37:36.251 Compiler for C supports arguments -Wpointer-arith: YES 00:37:36.251 Compiler for C supports arguments -Wsign-compare: YES 00:37:36.251 Compiler for C 
supports arguments -Wstrict-prototypes: YES 00:37:36.251 Compiler for C supports arguments -Wundef: YES 00:37:36.251 Compiler for C supports arguments -Wwrite-strings: YES 00:37:36.251 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:37:36.251 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:37:36.251 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:37:36.251 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:37:36.251 Program objdump found: YES (/usr/bin/objdump) 00:37:36.251 Compiler for C supports arguments -mavx512f: YES 00:37:36.251 Checking if "AVX512 checking" compiles: YES 00:37:36.251 Fetching value of define "__SSE4_2__" : 1 00:37:36.251 Fetching value of define "__AES__" : 1 00:37:36.251 Fetching value of define "__AVX__" : 1 00:37:36.251 Fetching value of define "__AVX2__" : 1 00:37:36.251 Fetching value of define "__AVX512BW__" : 1 00:37:36.251 Fetching value of define "__AVX512CD__" : 1 00:37:36.251 Fetching value of define "__AVX512DQ__" : 1 00:37:36.251 Fetching value of define "__AVX512F__" : 1 00:37:36.251 Fetching value of define "__AVX512VL__" : 1 00:37:36.251 Fetching value of define "__PCLMUL__" : 1 00:37:36.251 Fetching value of define "__RDRND__" : 1 00:37:36.251 Fetching value of define "__RDSEED__" : 1 00:37:36.251 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:37:36.251 Fetching value of define "__znver1__" : (undefined) 00:37:36.251 Fetching value of define "__znver2__" : (undefined) 00:37:36.251 Fetching value of define "__znver3__" : (undefined) 00:37:36.251 Fetching value of define "__znver4__" : (undefined) 00:37:36.251 Compiler for C supports arguments -ffat-lto-objects: YES 00:37:36.251 Library asan found: YES 00:37:36.251 Compiler for C supports arguments -Wno-format-truncation: YES 00:37:36.251 Message: lib/log: Defining dependency "log" 00:37:36.251 Message: lib/kvargs: Defining dependency "kvargs" 00:37:36.251 Message: lib/telemetry: Defining dependency "telemetry" 00:37:36.251 Library rt found: YES 00:37:36.251 Checking for function "getentropy" : NO 00:37:36.251 Message: lib/eal: Defining dependency "eal" 00:37:36.251 Message: lib/ring: Defining dependency "ring" 00:37:36.251 Message: lib/rcu: Defining dependency "rcu" 00:37:36.251 Message: lib/mempool: Defining dependency "mempool" 00:37:36.251 Message: lib/mbuf: Defining dependency "mbuf" 00:37:36.251 Fetching value of define "__PCLMUL__" : 1 (cached) 00:37:36.251 Fetching value of define "__AVX512F__" : 1 (cached) 00:37:36.251 Fetching value of define "__AVX512BW__" : 1 (cached) 00:37:36.251 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:37:36.251 Fetching value of define "__AVX512VL__" : 1 (cached) 00:37:36.251 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:37:36.251 Compiler for C supports arguments -mpclmul: YES 00:37:36.252 Compiler for C supports arguments -maes: YES 00:37:36.252 Compiler for C supports arguments -mavx512f: YES (cached) 00:37:36.252 Compiler for C supports arguments -mavx512bw: YES 00:37:36.252 Compiler for C supports arguments -mavx512dq: YES 00:37:36.252 Compiler for C supports arguments -mavx512vl: YES 00:37:36.252 Compiler for C supports arguments -mvpclmulqdq: YES 00:37:36.252 Compiler for C supports arguments -mavx2: YES 00:37:36.252 Compiler for C supports arguments -mavx: YES 00:37:36.252 Message: lib/net: Defining dependency "net" 00:37:36.252 Message: lib/meter: Defining dependency "meter" 00:37:36.252 Message: lib/ethdev: Defining dependency 
"ethdev" 00:37:36.252 Message: lib/pci: Defining dependency "pci" 00:37:36.252 Message: lib/cmdline: Defining dependency "cmdline" 00:37:36.252 Message: lib/hash: Defining dependency "hash" 00:37:36.252 Message: lib/timer: Defining dependency "timer" 00:37:36.252 Message: lib/compressdev: Defining dependency "compressdev" 00:37:36.252 Message: lib/cryptodev: Defining dependency "cryptodev" 00:37:36.252 Message: lib/dmadev: Defining dependency "dmadev" 00:37:36.252 Compiler for C supports arguments -Wno-cast-qual: YES 00:37:36.252 Message: lib/power: Defining dependency "power" 00:37:36.252 Message: lib/reorder: Defining dependency "reorder" 00:37:36.252 Message: lib/security: Defining dependency "security" 00:37:36.252 Has header "linux/userfaultfd.h" : YES 00:37:36.252 Has header "linux/vduse.h" : YES 00:37:36.252 Message: lib/vhost: Defining dependency "vhost" 00:37:36.252 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:37:36.252 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:37:36.252 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:37:36.252 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:37:36.252 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:37:36.252 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:37:36.252 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:37:36.252 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:37:36.252 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:37:36.252 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:37:36.252 Program doxygen found: YES (/usr/bin/doxygen) 00:37:36.252 Configuring doxy-api-html.conf using configuration 00:37:36.252 Configuring doxy-api-man.conf using configuration 00:37:36.252 Program mandb found: YES (/usr/bin/mandb) 00:37:36.252 Program sphinx-build found: NO 00:37:36.252 Configuring rte_build_config.h using configuration 00:37:36.252 Message: 00:37:36.252 ================= 00:37:36.252 Applications Enabled 00:37:36.252 ================= 00:37:36.252 00:37:36.252 apps: 00:37:36.252 00:37:36.252 00:37:36.252 Message: 00:37:36.252 ================= 00:37:36.252 Libraries Enabled 00:37:36.252 ================= 00:37:36.252 00:37:36.252 libs: 00:37:36.252 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:37:36.252 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:37:36.252 cryptodev, dmadev, power, reorder, security, vhost, 00:37:36.252 00:37:36.252 Message: 00:37:36.252 =============== 00:37:36.252 Drivers Enabled 00:37:36.252 =============== 00:37:36.252 00:37:36.252 common: 00:37:36.252 00:37:36.252 bus: 00:37:36.252 pci, vdev, 00:37:36.252 mempool: 00:37:36.252 ring, 00:37:36.252 dma: 00:37:36.252 00:37:36.252 net: 00:37:36.252 00:37:36.252 crypto: 00:37:36.252 00:37:36.252 compress: 00:37:36.252 00:37:36.252 vdpa: 00:37:36.252 00:37:36.252 00:37:36.252 Message: 00:37:36.252 ================= 00:37:36.252 Content Skipped 00:37:36.252 ================= 00:37:36.252 00:37:36.252 apps: 00:37:36.252 dumpcap: explicitly disabled via build config 00:37:36.252 graph: explicitly disabled via build config 00:37:36.252 pdump: explicitly disabled via build config 00:37:36.252 proc-info: explicitly disabled via build config 00:37:36.252 test-acl: explicitly disabled via build config 00:37:36.252 test-bbdev: explicitly disabled via build config 00:37:36.252 test-cmdline: explicitly disabled via 
build config 00:37:36.252 test-compress-perf: explicitly disabled via build config 00:37:36.252 test-crypto-perf: explicitly disabled via build config 00:37:36.252 test-dma-perf: explicitly disabled via build config 00:37:36.252 test-eventdev: explicitly disabled via build config 00:37:36.252 test-fib: explicitly disabled via build config 00:37:36.252 test-flow-perf: explicitly disabled via build config 00:37:36.252 test-gpudev: explicitly disabled via build config 00:37:36.252 test-mldev: explicitly disabled via build config 00:37:36.252 test-pipeline: explicitly disabled via build config 00:37:36.252 test-pmd: explicitly disabled via build config 00:37:36.252 test-regex: explicitly disabled via build config 00:37:36.252 test-sad: explicitly disabled via build config 00:37:36.252 test-security-perf: explicitly disabled via build config 00:37:36.252 00:37:36.252 libs: 00:37:36.252 metrics: explicitly disabled via build config 00:37:36.252 acl: explicitly disabled via build config 00:37:36.252 bbdev: explicitly disabled via build config 00:37:36.252 bitratestats: explicitly disabled via build config 00:37:36.252 bpf: explicitly disabled via build config 00:37:36.252 cfgfile: explicitly disabled via build config 00:37:36.252 distributor: explicitly disabled via build config 00:37:36.252 efd: explicitly disabled via build config 00:37:36.252 eventdev: explicitly disabled via build config 00:37:36.252 dispatcher: explicitly disabled via build config 00:37:36.252 gpudev: explicitly disabled via build config 00:37:36.252 gro: explicitly disabled via build config 00:37:36.252 gso: explicitly disabled via build config 00:37:36.252 ip_frag: explicitly disabled via build config 00:37:36.252 jobstats: explicitly disabled via build config 00:37:36.252 latencystats: explicitly disabled via build config 00:37:36.252 lpm: explicitly disabled via build config 00:37:36.252 member: explicitly disabled via build config 00:37:36.252 pcapng: explicitly disabled via build config 00:37:36.252 rawdev: explicitly disabled via build config 00:37:36.252 regexdev: explicitly disabled via build config 00:37:36.252 mldev: explicitly disabled via build config 00:37:36.252 rib: explicitly disabled via build config 00:37:36.252 sched: explicitly disabled via build config 00:37:36.252 stack: explicitly disabled via build config 00:37:36.252 ipsec: explicitly disabled via build config 00:37:36.252 pdcp: explicitly disabled via build config 00:37:36.252 fib: explicitly disabled via build config 00:37:36.252 port: explicitly disabled via build config 00:37:36.252 pdump: explicitly disabled via build config 00:37:36.252 table: explicitly disabled via build config 00:37:36.252 pipeline: explicitly disabled via build config 00:37:36.252 graph: explicitly disabled via build config 00:37:36.252 node: explicitly disabled via build config 00:37:36.252 00:37:36.252 drivers: 00:37:36.252 common/cpt: not in enabled drivers build config 00:37:36.252 common/dpaax: not in enabled drivers build config 00:37:36.252 common/iavf: not in enabled drivers build config 00:37:36.252 common/idpf: not in enabled drivers build config 00:37:36.252 common/mvep: not in enabled drivers build config 00:37:36.252 common/octeontx: not in enabled drivers build config 00:37:36.252 bus/auxiliary: not in enabled drivers build config 00:37:36.252 bus/cdx: not in enabled drivers build config 00:37:36.252 bus/dpaa: not in enabled drivers build config 00:37:36.252 bus/fslmc: not in enabled drivers build config 00:37:36.252 bus/ifpga: not in enabled drivers build 
config 00:37:36.252 bus/platform: not in enabled drivers build config 00:37:36.252 bus/vmbus: not in enabled drivers build config 00:37:36.252 common/cnxk: not in enabled drivers build config 00:37:36.252 common/mlx5: not in enabled drivers build config 00:37:36.252 common/nfp: not in enabled drivers build config 00:37:36.252 common/qat: not in enabled drivers build config 00:37:36.252 common/sfc_efx: not in enabled drivers build config 00:37:36.252 mempool/bucket: not in enabled drivers build config 00:37:36.252 mempool/cnxk: not in enabled drivers build config 00:37:36.252 mempool/dpaa: not in enabled drivers build config 00:37:36.252 mempool/dpaa2: not in enabled drivers build config 00:37:36.252 mempool/octeontx: not in enabled drivers build config 00:37:36.252 mempool/stack: not in enabled drivers build config 00:37:36.252 dma/cnxk: not in enabled drivers build config 00:37:36.252 dma/dpaa: not in enabled drivers build config 00:37:36.252 dma/dpaa2: not in enabled drivers build config 00:37:36.252 dma/hisilicon: not in enabled drivers build config 00:37:36.252 dma/idxd: not in enabled drivers build config 00:37:36.252 dma/ioat: not in enabled drivers build config 00:37:36.252 dma/skeleton: not in enabled drivers build config 00:37:36.252 net/af_packet: not in enabled drivers build config 00:37:36.252 net/af_xdp: not in enabled drivers build config 00:37:36.252 net/ark: not in enabled drivers build config 00:37:36.252 net/atlantic: not in enabled drivers build config 00:37:36.252 net/avp: not in enabled drivers build config 00:37:36.252 net/axgbe: not in enabled drivers build config 00:37:36.252 net/bnx2x: not in enabled drivers build config 00:37:36.252 net/bnxt: not in enabled drivers build config 00:37:36.252 net/bonding: not in enabled drivers build config 00:37:36.252 net/cnxk: not in enabled drivers build config 00:37:36.252 net/cpfl: not in enabled drivers build config 00:37:36.252 net/cxgbe: not in enabled drivers build config 00:37:36.252 net/dpaa: not in enabled drivers build config 00:37:36.252 net/dpaa2: not in enabled drivers build config 00:37:36.252 net/e1000: not in enabled drivers build config 00:37:36.253 net/ena: not in enabled drivers build config 00:37:36.253 net/enetc: not in enabled drivers build config 00:37:36.253 net/enetfec: not in enabled drivers build config 00:37:36.253 net/enic: not in enabled drivers build config 00:37:36.253 net/failsafe: not in enabled drivers build config 00:37:36.253 net/fm10k: not in enabled drivers build config 00:37:36.253 net/gve: not in enabled drivers build config 00:37:36.253 net/hinic: not in enabled drivers build config 00:37:36.253 net/hns3: not in enabled drivers build config 00:37:36.253 net/i40e: not in enabled drivers build config 00:37:36.253 net/iavf: not in enabled drivers build config 00:37:36.253 net/ice: not in enabled drivers build config 00:37:36.253 net/idpf: not in enabled drivers build config 00:37:36.253 net/igc: not in enabled drivers build config 00:37:36.253 net/ionic: not in enabled drivers build config 00:37:36.253 net/ipn3ke: not in enabled drivers build config 00:37:36.253 net/ixgbe: not in enabled drivers build config 00:37:36.253 net/mana: not in enabled drivers build config 00:37:36.253 net/memif: not in enabled drivers build config 00:37:36.253 net/mlx4: not in enabled drivers build config 00:37:36.253 net/mlx5: not in enabled drivers build config 00:37:36.253 net/mvneta: not in enabled drivers build config 00:37:36.253 net/mvpp2: not in enabled drivers build config 00:37:36.253 net/netvsc: not in 
enabled drivers build config 00:37:36.253 net/nfb: not in enabled drivers build config 00:37:36.253 net/nfp: not in enabled drivers build config 00:37:36.253 net/ngbe: not in enabled drivers build config 00:37:36.253 net/null: not in enabled drivers build config 00:37:36.253 net/octeontx: not in enabled drivers build config 00:37:36.253 net/octeon_ep: not in enabled drivers build config 00:37:36.253 net/pcap: not in enabled drivers build config 00:37:36.253 net/pfe: not in enabled drivers build config 00:37:36.253 net/qede: not in enabled drivers build config 00:37:36.253 net/ring: not in enabled drivers build config 00:37:36.253 net/sfc: not in enabled drivers build config 00:37:36.253 net/softnic: not in enabled drivers build config 00:37:36.253 net/tap: not in enabled drivers build config 00:37:36.253 net/thunderx: not in enabled drivers build config 00:37:36.253 net/txgbe: not in enabled drivers build config 00:37:36.253 net/vdev_netvsc: not in enabled drivers build config 00:37:36.253 net/vhost: not in enabled drivers build config 00:37:36.253 net/virtio: not in enabled drivers build config 00:37:36.253 net/vmxnet3: not in enabled drivers build config 00:37:36.253 raw/*: missing internal dependency, "rawdev" 00:37:36.253 crypto/armv8: not in enabled drivers build config 00:37:36.253 crypto/bcmfs: not in enabled drivers build config 00:37:36.253 crypto/caam_jr: not in enabled drivers build config 00:37:36.253 crypto/ccp: not in enabled drivers build config 00:37:36.253 crypto/cnxk: not in enabled drivers build config 00:37:36.253 crypto/dpaa_sec: not in enabled drivers build config 00:37:36.253 crypto/dpaa2_sec: not in enabled drivers build config 00:37:36.253 crypto/ipsec_mb: not in enabled drivers build config 00:37:36.253 crypto/mlx5: not in enabled drivers build config 00:37:36.253 crypto/mvsam: not in enabled drivers build config 00:37:36.253 crypto/nitrox: not in enabled drivers build config 00:37:36.253 crypto/null: not in enabled drivers build config 00:37:36.253 crypto/octeontx: not in enabled drivers build config 00:37:36.253 crypto/openssl: not in enabled drivers build config 00:37:36.253 crypto/scheduler: not in enabled drivers build config 00:37:36.253 crypto/uadk: not in enabled drivers build config 00:37:36.253 crypto/virtio: not in enabled drivers build config 00:37:36.253 compress/isal: not in enabled drivers build config 00:37:36.253 compress/mlx5: not in enabled drivers build config 00:37:36.253 compress/octeontx: not in enabled drivers build config 00:37:36.253 compress/zlib: not in enabled drivers build config 00:37:36.253 regex/*: missing internal dependency, "regexdev" 00:37:36.253 ml/*: missing internal dependency, "mldev" 00:37:36.253 vdpa/ifc: not in enabled drivers build config 00:37:36.253 vdpa/mlx5: not in enabled drivers build config 00:37:36.253 vdpa/nfp: not in enabled drivers build config 00:37:36.253 vdpa/sfc: not in enabled drivers build config 00:37:36.253 event/*: missing internal dependency, "eventdev" 00:37:36.253 baseband/*: missing internal dependency, "bbdev" 00:37:36.253 gpu/*: missing internal dependency, "gpudev" 00:37:36.253 00:37:36.253 00:37:36.253 Build targets in project: 85 00:37:36.253 00:37:36.253 DPDK 23.11.0 00:37:36.253 00:37:36.253 User defined options 00:37:36.253 default_library : static 00:37:36.253 libdir : lib 00:37:36.253 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:37:36.253 b_lto : true 00:37:36.253 b_sanitize : address 00:37:36.253 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:37:36.253 c_link_args : 
00:37:36.253 cpu_instruction_set: native 00:37:36.253 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:37:36.253 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:37:36.253 enable_docs : false 00:37:36.253 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:37:36.253 enable_kmods : false 00:37:36.253 tests : false 00:37:36.253 00:37:36.253 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:37:36.821 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:37:36.821 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:37:36.821 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:37:36.821 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:37:36.821 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:37:36.821 [5/265] Linking static target lib/librte_kvargs.a 00:37:36.821 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:37:36.821 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:37:37.080 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:37:37.080 [9/265] Linking static target lib/librte_log.a 00:37:37.080 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:37:37.080 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:37:37.080 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:37:37.080 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:37:37.080 [14/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:37:37.339 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:37:37.339 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:37:37.339 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:37:37.339 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:37:37.339 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:37:37.598 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:37:37.598 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:37:37.598 [22/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:37:37.598 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:37:37.598 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:37:37.857 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:37:37.857 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:37:37.857 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:37:37.857 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:37:37.857 [29/265] Linking target lib/librte_log.so.24.0 00:37:37.857 [30/265] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:37:37.857 [31/265] Linking static target lib/librte_telemetry.a 00:37:37.857 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:37:37.857 [33/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:37:37.857 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:37:37.857 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:37:38.116 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:37:38.116 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:37:38.116 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:37:38.116 [39/265] Linking target lib/librte_kvargs.so.24.0 00:37:38.116 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:37:38.117 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:37:38.117 [42/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:37:38.375 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:37:38.375 [44/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:37:38.375 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:37:38.375 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:37:38.375 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:37:38.375 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:37:38.635 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:37:38.635 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:37:38.635 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:37:38.635 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:37:38.635 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:37:38.635 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:37:38.635 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:37:38.894 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:37:38.894 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:37:38.894 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:37:38.894 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:37:38.894 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:37:38.894 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:37:38.894 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:37:38.894 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:37:38.895 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:37:38.895 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:37:39.154 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:37:39.154 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:37:39.154 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:37:39.154 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:37:39.154 [70/265] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:37:39.154 [71/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:37:39.154 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:37:39.414 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:37:39.414 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:37:39.414 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:37:39.414 [76/265] Linking target lib/librte_telemetry.so.24.0 00:37:39.414 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:37:39.673 [78/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:37:39.673 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:37:39.673 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:37:39.673 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:37:39.673 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:37:39.673 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:37:39.673 [84/265] Linking static target lib/librte_ring.a 00:37:39.673 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:37:39.932 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:37:39.932 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:37:39.932 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:37:39.932 [89/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:37:39.932 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:37:39.932 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:37:40.191 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:37:40.191 [93/265] Linking static target lib/librte_eal.a 00:37:40.191 [94/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:37:40.191 [95/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:37:40.191 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:37:40.450 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:37:40.450 [98/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:37:40.450 [99/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:37:40.450 [100/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:37:40.450 [101/265] Linking static target lib/librte_mempool.a 00:37:40.450 [102/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:37:40.450 [103/265] Linking static target lib/librte_rcu.a 00:37:40.708 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:37:40.708 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:37:40.708 [106/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:37:40.708 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:37:40.708 [108/265] Linking static target lib/librte_net.a 00:37:40.708 [109/265] Linking static target lib/librte_meter.a 00:37:40.708 [110/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:37:40.967 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:37:40.967 [112/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:37:40.967 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:37:40.967 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:37:40.967 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:37:40.967 [116/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:37:41.225 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:37:41.484 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:37:41.484 [119/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:37:41.484 [120/265] Linking static target lib/librte_mbuf.a 00:37:41.743 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:37:41.743 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:37:41.743 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:37:41.743 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:37:42.029 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:37:42.029 [126/265] Linking static target lib/librte_pci.a 00:37:42.029 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:37:42.029 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:37:42.029 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:37:42.029 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:37:42.287 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:37:42.287 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:37:42.287 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:37:42.287 [134/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:37:42.287 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:37:42.287 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:37:42.287 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:37:42.287 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:37:42.287 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:37:42.287 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:37:42.287 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:37:42.545 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:37:42.545 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:37:42.545 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:37:42.545 [145/265] Linking static target lib/librte_cmdline.a 00:37:42.802 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:37:42.802 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:37:43.060 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:37:43.060 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:37:43.060 [150/265] Linking static target lib/librte_timer.a 00:37:43.060 [151/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:37:43.060 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:37:43.319 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:37:43.319 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:37:43.319 [155/265] Linking static target lib/librte_compressdev.a 00:37:43.319 [156/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:37:43.577 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:37:43.578 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:37:43.578 [159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:37:43.578 [160/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:37:43.578 [161/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:37:43.836 [162/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:43.836 [163/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:37:43.836 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:37:43.836 [165/265] Linking static target lib/librte_dmadev.a 00:37:44.094 [166/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:37:44.095 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:37:44.095 [168/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:37:44.353 [169/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:44.353 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:37:44.353 [171/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:37:44.353 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:37:44.353 [173/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:37:44.610 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:37:44.610 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:37:44.610 [176/265] Linking static target lib/librte_power.a 00:37:44.868 [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:37:44.868 [178/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:37:44.868 [179/265] Linking static target lib/librte_reorder.a 00:37:44.868 [180/265] Linking static target lib/librte_security.a 00:37:44.868 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:37:44.868 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:37:45.126 [183/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:37:45.126 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:37:45.126 [185/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:37:45.385 [186/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:37:45.385 [187/265] Linking static target lib/librte_cryptodev.a 00:37:45.385 [188/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:37:45.644 [189/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:37:45.644 [190/265] Linking static target lib/librte_ethdev.a 00:37:45.902 [191/265] 
Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:37:45.902 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:37:46.160 [193/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:37:46.418 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:37:46.418 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:37:46.418 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:37:46.676 [197/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:46.676 [198/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:37:46.676 [199/265] Linking static target lib/librte_hash.a 00:37:46.934 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:37:46.934 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:37:46.934 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:37:46.934 [203/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:37:47.190 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:37:47.190 [205/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:37:47.190 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:37:47.190 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:37:47.190 [208/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:37:47.448 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:37:47.448 [210/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:37:47.448 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:37:47.448 [212/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:37:47.448 [213/265] Linking static target drivers/librte_bus_pci.a 00:37:47.448 [214/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:37:47.448 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:37:47.448 [216/265] Linking static target drivers/librte_bus_vdev.a 00:37:47.706 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:37:47.706 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:37:47.706 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:47.706 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:37:47.706 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:37:47.706 [222/265] Linking static target drivers/librte_mempool_ring.a 00:37:47.706 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:37:47.964 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:37:48.896 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:37:55.452 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:37:56.827 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:37:58.726 [228/265] Linking target 
lib/librte_eal.so.24.0 00:37:58.726 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs 00:37:58.726 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:37:58.726 [230/265] Linking target lib/librte_pci.so.24.0 00:37:58.726 [231/265] Linking target lib/librte_meter.so.24.0 00:37:58.726 [232/265] Linking target lib/librte_ring.so.24.0 00:37:58.726 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:37:58.726 [234/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:37:58.726 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:37:58.726 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:37:58.984 [237/265] Linking target lib/librte_timer.so.24.0 00:37:58.984 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:37:59.243 [239/265] Linking target lib/librte_dmadev.so.24.0 00:37:59.243 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:37:59.810 [241/265] Linking target lib/librte_rcu.so.24.0 00:37:59.810 [242/265] Linking target lib/librte_mempool.so.24.0 00:37:59.810 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:37:59.810 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:38:00.068 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:38:00.326 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:38:01.755 [247/265] Linking target lib/librte_mbuf.so.24.0 00:38:01.756 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:38:02.323 [249/265] Linking target lib/librte_reorder.so.24.0 00:38:02.323 [250/265] Linking target lib/librte_compressdev.so.24.0 00:38:02.888 [251/265] Linking target lib/librte_net.so.24.0 00:38:02.888 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:38:04.262 [253/265] Linking target lib/librte_cmdline.so.24.0 00:38:04.262 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:38:04.262 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:38:04.828 [256/265] Linking target lib/librte_security.so.24.0 00:38:07.448 [257/265] Linking target lib/librte_hash.so.24.0 00:38:07.448 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:38:14.005 [259/265] Linking target lib/librte_ethdev.so.24.0 00:38:14.005 lto-wrapper: warning: using serial compilation of 6 LTRANS jobs 00:38:14.005 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:38:15.933 [261/265] Linking target lib/librte_power.so.24.0 00:38:24.114 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:38:24.114 [263/265] Linking static target lib/librte_vhost.a 00:38:25.485 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:39:21.711 [265/265] Linking target lib/librte_vhost.so.24.0 00:39:21.711 lto-wrapper: warning: using serial compilation of 8 LTRANS jobs 00:39:21.711 INFO: autodetecting backend as ninja 00:39:21.711 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:39:21.711 CC lib/log/log.o 00:39:21.711 CC lib/log/log_flags.o 00:39:21.711 CC lib/ut/ut.o 00:39:21.711 CC lib/log/log_deprecated.o 00:39:21.711 CC lib/ut_mock/mock.o 00:39:21.711 
LIB libspdk_log.a 00:39:21.711 LIB libspdk_ut_mock.a 00:39:21.711 LIB libspdk_ut.a 00:39:21.711 CC lib/dma/dma.o 00:39:21.711 CC lib/ioat/ioat.o 00:39:21.711 CXX lib/trace_parser/trace.o 00:39:21.711 CC lib/util/base64.o 00:39:21.711 CC lib/util/bit_array.o 00:39:21.711 CC lib/util/crc16.o 00:39:21.711 CC lib/util/cpuset.o 00:39:21.711 CC lib/util/crc32.o 00:39:21.711 CC lib/util/crc32c.o 00:39:21.711 CC lib/vfio_user/host/vfio_user_pci.o 00:39:21.711 LIB libspdk_dma.a 00:39:21.711 CC lib/vfio_user/host/vfio_user.o 00:39:21.711 CC lib/util/crc32_ieee.o 00:39:21.711 CC lib/util/crc64.o 00:39:21.711 CC lib/util/dif.o 00:39:21.711 CC lib/util/fd.o 00:39:21.711 CC lib/util/file.o 00:39:21.711 CC lib/util/hexlify.o 00:39:21.711 LIB libspdk_ioat.a 00:39:21.711 CC lib/util/iov.o 00:39:21.711 CC lib/util/math.o 00:39:21.711 CC lib/util/pipe.o 00:39:21.711 CC lib/util/strerror_tls.o 00:39:21.711 LIB libspdk_vfio_user.a 00:39:21.712 CC lib/util/string.o 00:39:21.712 CC lib/util/uuid.o 00:39:21.712 CC lib/util/fd_group.o 00:39:21.712 CC lib/util/xor.o 00:39:21.712 CC lib/util/zipf.o 00:39:21.712 LIB libspdk_util.a 00:39:21.712 LIB libspdk_trace_parser.a 00:39:21.712 CC lib/rdma/rdma_verbs.o 00:39:21.712 CC lib/rdma/common.o 00:39:21.712 CC lib/env_dpdk/env.o 00:39:21.712 CC lib/env_dpdk/memory.o 00:39:21.712 CC lib/env_dpdk/pci.o 00:39:21.712 CC lib/env_dpdk/init.o 00:39:21.712 CC lib/idxd/idxd.o 00:39:21.712 CC lib/vmd/vmd.o 00:39:21.712 CC lib/json/json_parse.o 00:39:21.712 CC lib/conf/conf.o 00:39:21.712 CC lib/idxd/idxd_user.o 00:39:21.712 CC lib/json/json_util.o 00:39:21.712 LIB libspdk_rdma.a 00:39:21.712 CC lib/json/json_write.o 00:39:21.712 CC lib/vmd/led.o 00:39:21.712 LIB libspdk_conf.a 00:39:21.712 CC lib/env_dpdk/threads.o 00:39:21.712 CC lib/env_dpdk/pci_ioat.o 00:39:21.712 CC lib/env_dpdk/pci_virtio.o 00:39:21.712 CC lib/env_dpdk/pci_vmd.o 00:39:21.712 LIB libspdk_idxd.a 00:39:21.712 CC lib/env_dpdk/pci_idxd.o 00:39:21.712 LIB libspdk_vmd.a 00:39:21.712 CC lib/env_dpdk/pci_event.o 00:39:21.712 CC lib/env_dpdk/sigbus_handler.o 00:39:21.712 CC lib/env_dpdk/pci_dpdk.o 00:39:21.712 CC lib/env_dpdk/pci_dpdk_2207.o 00:39:21.712 CC lib/env_dpdk/pci_dpdk_2211.o 00:39:21.712 LIB libspdk_json.a 00:39:21.712 CC lib/jsonrpc/jsonrpc_client.o 00:39:21.712 CC lib/jsonrpc/jsonrpc_server.o 00:39:21.712 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:39:21.712 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:39:21.712 LIB libspdk_jsonrpc.a 00:39:21.712 LIB libspdk_env_dpdk.a 00:39:21.712 CC lib/rpc/rpc.o 00:39:21.712 LIB libspdk_rpc.a 00:39:21.712 CC lib/sock/sock.o 00:39:21.712 CC lib/sock/sock_rpc.o 00:39:21.712 CC lib/notify/notify.o 00:39:21.712 CC lib/trace/trace.o 00:39:21.712 CC lib/trace/trace_flags.o 00:39:21.712 CC lib/notify/notify_rpc.o 00:39:21.712 CC lib/trace/trace_rpc.o 00:39:21.712 LIB libspdk_notify.a 00:39:21.712 LIB libspdk_trace.a 00:39:21.712 LIB libspdk_sock.a 00:39:21.712 CC lib/thread/thread.o 00:39:21.712 CC lib/thread/iobuf.o 00:39:21.712 CC lib/nvme/nvme_ctrlr_cmd.o 00:39:21.712 CC lib/nvme/nvme_ctrlr.o 00:39:21.712 CC lib/nvme/nvme_fabric.o 00:39:21.712 CC lib/nvme/nvme_ns_cmd.o 00:39:21.712 CC lib/nvme/nvme_pcie_common.o 00:39:21.712 CC lib/nvme/nvme_qpair.o 00:39:21.712 CC lib/nvme/nvme_pcie.o 00:39:21.712 CC lib/nvme/nvme_ns.o 00:39:21.712 CC lib/nvme/nvme.o 00:39:21.712 LIB libspdk_thread.a 00:39:21.712 CC lib/nvme/nvme_quirks.o 00:39:21.712 CC lib/nvme/nvme_transport.o 00:39:21.712 CC lib/nvme/nvme_discovery.o 00:39:21.712 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:39:21.712 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:39:21.712 CC lib/nvme/nvme_tcp.o 00:39:21.712 CC lib/nvme/nvme_opal.o 00:39:21.712 CC lib/nvme/nvme_io_msg.o 00:39:21.712 CC lib/accel/accel.o 00:39:21.712 CC lib/nvme/nvme_poll_group.o 00:39:21.712 CC lib/accel/accel_rpc.o 00:39:21.712 CC lib/blob/blobstore.o 00:39:21.712 CC lib/init/json_config.o 00:39:21.712 CC lib/init/subsystem.o 00:39:21.712 CC lib/virtio/virtio.o 00:39:21.712 CC lib/virtio/virtio_vhost_user.o 00:39:21.712 CC lib/virtio/virtio_vfio_user.o 00:39:21.712 CC lib/accel/accel_sw.o 00:39:21.712 CC lib/virtio/virtio_pci.o 00:39:21.712 CC lib/nvme/nvme_zns.o 00:39:21.712 CC lib/init/subsystem_rpc.o 00:39:21.712 CC lib/init/rpc.o 00:39:21.712 CC lib/nvme/nvme_cuse.o 00:39:21.712 CC lib/nvme/nvme_vfio_user.o 00:39:21.712 CC lib/nvme/nvme_rdma.o 00:39:21.712 LIB libspdk_accel.a 00:39:21.712 LIB libspdk_virtio.a 00:39:21.712 CC lib/blob/request.o 00:39:21.712 CC lib/blob/zeroes.o 00:39:21.712 LIB libspdk_init.a 00:39:21.712 CC lib/blob/blob_bs_dev.o 00:39:21.712 CC lib/bdev/bdev.o 00:39:21.712 CC lib/bdev/bdev_rpc.o 00:39:21.712 CC lib/event/app.o 00:39:21.712 CC lib/event/reactor.o 00:39:21.712 CC lib/event/log_rpc.o 00:39:21.712 CC lib/event/app_rpc.o 00:39:21.712 CC lib/bdev/bdev_zone.o 00:39:21.712 CC lib/event/scheduler_static.o 00:39:21.712 CC lib/bdev/part.o 00:39:21.712 CC lib/bdev/scsi_nvme.o 00:39:21.712 LIB libspdk_event.a 00:39:21.712 LIB libspdk_nvme.a 00:39:21.712 LIB libspdk_blob.a 00:39:21.712 CC lib/lvol/lvol.o 00:39:21.712 CC lib/blobfs/blobfs.o 00:39:21.712 CC lib/blobfs/tree.o 00:39:21.712 LIB libspdk_bdev.a 00:39:21.712 CC lib/nvmf/ctrlr.o 00:39:21.712 CC lib/nvmf/subsystem.o 00:39:21.712 CC lib/nvmf/ctrlr_discovery.o 00:39:21.712 CC lib/nvmf/nvmf.o 00:39:21.712 CC lib/nvmf/ctrlr_bdev.o 00:39:21.712 CC lib/nbd/nbd.o 00:39:21.712 CC lib/scsi/dev.o 00:39:21.712 CC lib/ftl/ftl_core.o 00:39:21.712 LIB libspdk_blobfs.a 00:39:21.712 LIB libspdk_lvol.a 00:39:21.971 CC lib/ftl/ftl_init.o 00:39:21.971 CC lib/ftl/ftl_layout.o 00:39:21.971 CC lib/scsi/lun.o 00:39:21.971 CC lib/nvmf/nvmf_rpc.o 00:39:21.971 CC lib/nbd/nbd_rpc.o 00:39:21.971 CC lib/nvmf/transport.o 00:39:21.971 CC lib/ftl/ftl_debug.o 00:39:21.971 CC lib/scsi/port.o 00:39:21.971 CC lib/scsi/scsi.o 00:39:21.971 CC lib/ftl/ftl_io.o 00:39:22.229 CC lib/ftl/ftl_sb.o 00:39:22.229 LIB libspdk_nbd.a 00:39:22.229 CC lib/nvmf/tcp.o 00:39:22.229 CC lib/nvmf/rdma.o 00:39:22.229 CC lib/scsi/scsi_bdev.o 00:39:22.229 CC lib/scsi/scsi_pr.o 00:39:22.229 CC lib/scsi/scsi_rpc.o 00:39:22.229 CC lib/scsi/task.o 00:39:22.229 CC lib/ftl/ftl_l2p.o 00:39:22.229 CC lib/ftl/ftl_l2p_flat.o 00:39:22.229 CC lib/ftl/ftl_nv_cache.o 00:39:22.488 CC lib/ftl/ftl_band.o 00:39:22.488 CC lib/ftl/ftl_band_ops.o 00:39:22.488 CC lib/ftl/ftl_writer.o 00:39:22.488 CC lib/ftl/ftl_rq.o 00:39:22.488 CC lib/ftl/ftl_reloc.o 00:39:22.488 CC lib/ftl/ftl_l2p_cache.o 00:39:22.488 LIB libspdk_scsi.a 00:39:22.488 CC lib/ftl/ftl_p2l.o 00:39:22.488 CC lib/ftl/mngt/ftl_mngt.o 00:39:22.488 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:39:22.488 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:39:22.746 CC lib/ftl/mngt/ftl_mngt_startup.o 00:39:22.746 CC lib/ftl/mngt/ftl_mngt_md.o 00:39:22.746 CC lib/ftl/mngt/ftl_mngt_misc.o 00:39:22.746 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:39:22.746 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:39:22.746 CC lib/ftl/mngt/ftl_mngt_band.o 00:39:22.746 CC lib/iscsi/conn.o 00:39:22.746 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:39:22.746 CC lib/iscsi/init_grp.o 00:39:22.746 CC lib/vhost/vhost.o 00:39:22.746 CC lib/iscsi/iscsi.o 00:39:23.029 
CC lib/ftl/mngt/ftl_mngt_p2l.o 00:39:23.029 CC lib/iscsi/md5.o 00:39:23.029 LIB libspdk_nvmf.a 00:39:23.029 CC lib/iscsi/param.o 00:39:23.029 CC lib/iscsi/portal_grp.o 00:39:23.029 CC lib/iscsi/tgt_node.o 00:39:23.029 CC lib/iscsi/iscsi_subsystem.o 00:39:23.029 CC lib/iscsi/iscsi_rpc.o 00:39:23.029 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:39:23.029 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:39:23.029 CC lib/iscsi/task.o 00:39:23.287 CC lib/vhost/vhost_rpc.o 00:39:23.287 CC lib/vhost/vhost_scsi.o 00:39:23.287 CC lib/vhost/vhost_blk.o 00:39:23.287 CC lib/vhost/rte_vhost_user.o 00:39:23.287 CC lib/ftl/utils/ftl_conf.o 00:39:23.287 CC lib/ftl/utils/ftl_md.o 00:39:23.287 CC lib/ftl/utils/ftl_mempool.o 00:39:23.287 CC lib/ftl/utils/ftl_bitmap.o 00:39:23.287 CC lib/ftl/utils/ftl_property.o 00:39:23.287 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:39:23.287 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:39:23.545 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:39:23.545 LIB libspdk_iscsi.a 00:39:23.545 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:39:23.545 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:39:23.545 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:39:23.545 CC lib/ftl/upgrade/ftl_sb_v3.o 00:39:23.545 CC lib/ftl/upgrade/ftl_sb_v5.o 00:39:23.545 CC lib/ftl/nvc/ftl_nvc_dev.o 00:39:23.545 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:39:23.545 CC lib/ftl/base/ftl_base_dev.o 00:39:23.804 CC lib/ftl/base/ftl_base_bdev.o 00:39:23.804 LIB libspdk_ftl.a 00:39:24.063 LIB libspdk_vhost.a 00:39:24.321 CC module/env_dpdk/env_dpdk_rpc.o 00:39:24.321 CC module/accel/ioat/accel_ioat.o 00:39:24.321 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:39:24.321 CC module/sock/posix/posix.o 00:39:24.321 CC module/blob/bdev/blob_bdev.o 00:39:24.321 CC module/scheduler/dynamic/scheduler_dynamic.o 00:39:24.321 CC module/accel/error/accel_error.o 00:39:24.321 CC module/scheduler/gscheduler/gscheduler.o 00:39:24.321 CC module/accel/dsa/accel_dsa.o 00:39:24.321 CC module/accel/iaa/accel_iaa.o 00:39:24.321 LIB libspdk_env_dpdk_rpc.a 00:39:24.321 CC module/accel/iaa/accel_iaa_rpc.o 00:39:24.321 CC module/accel/ioat/accel_ioat_rpc.o 00:39:24.321 LIB libspdk_scheduler_dpdk_governor.a 00:39:24.321 LIB libspdk_scheduler_dynamic.a 00:39:24.321 LIB libspdk_blob_bdev.a 00:39:24.321 CC module/accel/error/accel_error_rpc.o 00:39:24.321 LIB libspdk_scheduler_gscheduler.a 00:39:24.321 CC module/accel/dsa/accel_dsa_rpc.o 00:39:24.321 LIB libspdk_accel_iaa.a 00:39:24.580 LIB libspdk_accel_ioat.a 00:39:24.580 LIB libspdk_accel_dsa.a 00:39:24.580 LIB libspdk_accel_error.a 00:39:24.580 CC module/bdev/malloc/bdev_malloc.o 00:39:24.580 CC module/bdev/delay/vbdev_delay.o 00:39:24.580 CC module/bdev/gpt/gpt.o 00:39:24.580 CC module/bdev/lvol/vbdev_lvol.o 00:39:24.580 CC module/bdev/error/vbdev_error.o 00:39:24.580 CC module/bdev/error/vbdev_error_rpc.o 00:39:24.580 CC module/blobfs/bdev/blobfs_bdev.o 00:39:24.580 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:39:24.580 CC module/bdev/null/bdev_null.o 00:39:24.580 LIB libspdk_sock_posix.a 00:39:24.580 CC module/bdev/null/bdev_null_rpc.o 00:39:24.580 CC module/bdev/gpt/vbdev_gpt.o 00:39:24.839 LIB libspdk_blobfs_bdev.a 00:39:24.839 CC module/bdev/delay/vbdev_delay_rpc.o 00:39:24.839 CC module/bdev/malloc/bdev_malloc_rpc.o 00:39:24.839 CC module/bdev/nvme/bdev_nvme.o 00:39:24.839 CC module/bdev/nvme/bdev_nvme_rpc.o 00:39:24.839 LIB libspdk_bdev_error.a 00:39:24.839 CC module/bdev/passthru/vbdev_passthru.o 00:39:24.839 LIB libspdk_bdev_null.a 00:39:24.839 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:39:24.839 CC 
module/bdev/raid/bdev_raid.o 00:39:24.839 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:39:24.839 LIB libspdk_bdev_gpt.a 00:39:24.839 LIB libspdk_bdev_delay.a 00:39:24.839 LIB libspdk_bdev_malloc.a 00:39:24.839 CC module/bdev/nvme/nvme_rpc.o 00:39:24.839 CC module/bdev/nvme/bdev_mdns_client.o 00:39:24.839 CC module/bdev/split/vbdev_split.o 00:39:24.839 CC module/bdev/split/vbdev_split_rpc.o 00:39:24.839 CC module/bdev/zone_block/vbdev_zone_block.o 00:39:24.839 LIB libspdk_bdev_passthru.a 00:39:25.097 CC module/bdev/nvme/vbdev_opal.o 00:39:25.097 LIB libspdk_bdev_lvol.a 00:39:25.097 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:39:25.097 CC module/bdev/raid/bdev_raid_rpc.o 00:39:25.097 CC module/bdev/nvme/vbdev_opal_rpc.o 00:39:25.097 LIB libspdk_bdev_split.a 00:39:25.097 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:39:25.097 CC module/bdev/raid/bdev_raid_sb.o 00:39:25.097 CC module/bdev/raid/raid0.o 00:39:25.097 CC module/bdev/raid/raid1.o 00:39:25.097 CC module/bdev/raid/concat.o 00:39:25.097 LIB libspdk_bdev_zone_block.a 00:39:25.097 CC module/bdev/raid/raid5f.o 00:39:25.355 CC module/bdev/aio/bdev_aio.o 00:39:25.355 CC module/bdev/aio/bdev_aio_rpc.o 00:39:25.355 CC module/bdev/ftl/bdev_ftl.o 00:39:25.355 CC module/bdev/ftl/bdev_ftl_rpc.o 00:39:25.356 CC module/bdev/virtio/bdev_virtio_scsi.o 00:39:25.356 CC module/bdev/iscsi/bdev_iscsi.o 00:39:25.356 CC module/bdev/virtio/bdev_virtio_blk.o 00:39:25.356 CC module/bdev/virtio/bdev_virtio_rpc.o 00:39:25.356 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:39:25.356 LIB libspdk_bdev_aio.a 00:39:25.614 LIB libspdk_bdev_raid.a 00:39:25.614 LIB libspdk_bdev_ftl.a 00:39:25.614 LIB libspdk_bdev_iscsi.a 00:39:25.614 LIB libspdk_bdev_virtio.a 00:39:25.614 LIB libspdk_bdev_nvme.a 00:39:26.183 CC module/event/subsystems/scheduler/scheduler.o 00:39:26.183 CC module/event/subsystems/vmd/vmd_rpc.o 00:39:26.183 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:39:26.184 CC module/event/subsystems/vmd/vmd.o 00:39:26.184 CC module/event/subsystems/sock/sock.o 00:39:26.184 CC module/event/subsystems/iobuf/iobuf.o 00:39:26.184 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:39:26.184 LIB libspdk_event_sock.a 00:39:26.184 LIB libspdk_event_vhost_blk.a 00:39:26.184 LIB libspdk_event_scheduler.a 00:39:26.184 LIB libspdk_event_vmd.a 00:39:26.184 LIB libspdk_event_iobuf.a 00:39:26.443 CC module/event/subsystems/accel/accel.o 00:39:26.700 LIB libspdk_event_accel.a 00:39:26.958 CC module/event/subsystems/bdev/bdev.o 00:39:26.958 LIB libspdk_event_bdev.a 00:39:27.217 CC module/event/subsystems/scsi/scsi.o 00:39:27.217 CC module/event/subsystems/nbd/nbd.o 00:39:27.217 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:39:27.217 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:39:27.475 LIB libspdk_event_scsi.a 00:39:27.475 LIB libspdk_event_nbd.a 00:39:27.475 LIB libspdk_event_nvmf.a 00:39:27.475 CC module/event/subsystems/iscsi/iscsi.o 00:39:27.475 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:39:27.734 LIB libspdk_event_vhost_scsi.a 00:39:27.734 LIB libspdk_event_iscsi.a 00:39:27.993 CC app/trace_record/trace_record.o 00:39:27.993 CXX app/trace/trace.o 00:39:27.993 CC app/nvmf_tgt/nvmf_main.o 00:39:27.993 CC app/iscsi_tgt/iscsi_tgt.o 00:39:27.993 CC examples/ioat/perf/perf.o 00:39:27.993 CC examples/accel/perf/accel_perf.o 00:39:27.993 CC app/spdk_tgt/spdk_tgt.o 00:39:27.993 CC examples/bdev/hello_world/hello_bdev.o 00:39:27.993 CC examples/blob/hello_world/hello_blob.o 00:39:27.993 CC test/accel/dif/dif.o 00:39:27.993 LINK spdk_trace_record 00:39:27.993 LINK ioat_perf 
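Note: the CC/CXX/LIB/LINK entries in this stretch are SPDK's quiet make output; each source file is compiled, the objects are archived into the static libspdk_*.a libraries, and the apps, examples, and test tools that follow are linked against them. A minimal, hedged sketch of reproducing this phase by hand; the configure switches are assumptions inferred from the DPDK options shown earlier (b_sanitize=address, b_lto=true), not commands recorded in this log:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-asan --enable-lto   # assumed flags matching the sanitizer/LTO options above
    make -j10                                # emits CC/LIB/LINK lines like the ones in this log

The -j10 mirrors the job count the log shows ninja using for the DPDK submodule build.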
00:39:27.993 LINK nvmf_tgt 00:39:28.252 LINK iscsi_tgt 00:39:28.252 LINK spdk_tgt 00:39:28.252 LINK hello_bdev 00:39:28.252 LINK hello_blob 00:39:28.252 LINK accel_perf 00:39:28.252 LINK spdk_trace 00:39:28.252 LINK dif 00:39:34.819 CC examples/bdev/bdevperf/bdevperf.o 00:39:36.751 LINK bdevperf 00:39:37.686 CC examples/nvme/hello_world/hello_world.o 00:39:38.625 LINK hello_world 00:39:41.156 CC examples/ioat/verify/verify.o 00:39:41.724 LINK verify 00:40:08.271 CC examples/nvme/reconnect/reconnect.o 00:40:08.271 LINK reconnect 00:40:23.180 CC examples/nvme/nvme_manage/nvme_manage.o 00:40:23.747 LINK nvme_manage 00:40:50.301 CC examples/blob/cli/blobcli.o 00:40:50.301 LINK blobcli 00:41:16.905 CC test/app/bdev_svc/bdev_svc.o 00:41:16.905 LINK bdev_svc 00:41:38.826 CC examples/nvme/arbitration/arbitration.o 00:41:38.826 LINK arbitration 00:41:51.035 CC examples/nvme/hotplug/hotplug.o 00:41:51.969 CC examples/nvme/cmb_copy/cmb_copy.o 00:41:52.227 LINK hotplug 00:41:52.795 LINK cmb_copy 00:41:56.982 CC examples/nvme/abort/abort.o 00:41:58.362 LINK abort 00:41:58.930 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:41:59.867 LINK pmr_persistence 00:42:17.951 CC test/bdev/bdevio/bdevio.o 00:42:18.887 CC test/blobfs/mkfs/mkfs.o 00:42:20.260 LINK mkfs 00:42:20.260 LINK bdevio 00:42:38.521 CC app/spdk_lspci/spdk_lspci.o 00:42:38.521 CC app/spdk_nvme_perf/perf.o 00:42:38.521 LINK spdk_lspci 00:42:38.521 LINK spdk_nvme_perf 00:42:43.794 CC app/spdk_nvme_identify/identify.o 00:42:45.171 CC examples/sock/hello_world/hello_sock.o 00:42:45.171 LINK spdk_nvme_identify 00:42:46.142 LINK hello_sock 00:42:52.734 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:42:54.639 LINK nvme_fuzz 00:42:57.921 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:43:03.249 LINK iscsi_fuzz 00:43:15.457 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:43:16.394 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:43:18.320 LINK vhost_fuzz 00:43:24.890 TEST_HEADER include/spdk/config.h 00:43:24.890 CXX test/cpp_headers/accel.o 00:43:25.827 CXX test/cpp_headers/accel_module.o 00:43:26.396 CXX test/cpp_headers/assert.o 00:43:27.332 CXX test/cpp_headers/barrier.o 00:43:28.268 CXX test/cpp_headers/base64.o 00:43:29.204 CXX test/cpp_headers/bdev.o 00:43:30.582 CXX test/cpp_headers/bdev_module.o 00:43:31.519 CXX test/cpp_headers/bdev_zone.o 00:43:32.454 CXX test/cpp_headers/bit_array.o 00:43:33.021 CC test/dma/test_dma/test_dma.o 00:43:33.280 CC examples/vmd/lsvmd/lsvmd.o 00:43:33.537 CXX test/cpp_headers/bit_pool.o 00:43:33.795 CXX test/cpp_headers/blob.o 00:43:34.054 LINK lsvmd 00:43:34.620 LINK test_dma 00:43:34.620 CXX test/cpp_headers/blob_bdev.o 00:43:35.557 CC test/env/mem_callbacks/mem_callbacks.o 00:43:35.816 CXX test/cpp_headers/blobfs.o 00:43:36.790 CXX test/cpp_headers/blobfs_bdev.o 00:43:37.357 CXX test/cpp_headers/conf.o 00:43:37.926 LINK mem_callbacks 00:43:38.185 CXX test/cpp_headers/config.o 00:43:38.444 CXX test/cpp_headers/cpuset.o 00:43:39.012 CXX test/cpp_headers/crc16.o 00:43:39.950 CXX test/cpp_headers/crc32.o 00:43:40.208 CC app/spdk_nvme_discover/discovery_aer.o 00:43:40.806 CXX test/cpp_headers/crc64.o 00:43:41.373 LINK spdk_nvme_discover 00:43:41.373 CXX test/cpp_headers/dif.o 00:43:42.311 CXX test/cpp_headers/dma.o 00:43:43.246 CXX test/cpp_headers/endian.o 00:43:44.189 CXX test/cpp_headers/env.o 00:43:44.759 CXX test/cpp_headers/env_dpdk.o 00:43:45.694 CXX test/cpp_headers/event.o 00:43:46.264 CXX test/cpp_headers/fd.o 00:43:47.198 CXX test/cpp_headers/fd_group.o 00:43:47.765 CXX test/cpp_headers/file.o 00:43:48.702 CXX 
test/cpp_headers/ftl.o 00:43:49.639 CXX test/cpp_headers/gpt_spec.o 00:43:49.639 CC app/spdk_top/spdk_top.o 00:43:50.577 CXX test/cpp_headers/hexlify.o 00:43:51.144 CXX test/cpp_headers/histogram_data.o 00:43:52.081 CC app/vhost/vhost.o 00:43:52.081 LINK spdk_top 00:43:52.082 CXX test/cpp_headers/idxd.o 00:43:52.648 LINK vhost 00:43:53.216 CXX test/cpp_headers/idxd_spec.o 00:43:53.784 CC test/app/histogram_perf/histogram_perf.o 00:43:54.042 CXX test/cpp_headers/init.o 00:43:54.610 LINK histogram_perf 00:43:55.178 CXX test/cpp_headers/ioat.o 00:43:56.171 CXX test/cpp_headers/ioat_spec.o 00:43:56.431 CXX test/cpp_headers/iscsi_spec.o 00:43:57.001 CXX test/cpp_headers/json.o 00:43:57.939 CXX test/cpp_headers/jsonrpc.o 00:43:58.875 CXX test/cpp_headers/likely.o 00:43:59.812 CXX test/cpp_headers/log.o 00:44:00.748 CXX test/cpp_headers/lvol.o 00:44:02.124 CXX test/cpp_headers/memory.o 00:44:03.057 CXX test/cpp_headers/mmio.o 00:44:04.431 CXX test/cpp_headers/nbd.o 00:44:04.431 CXX test/cpp_headers/notify.o 00:44:05.846 CXX test/cpp_headers/nvme.o 00:44:06.779 CXX test/cpp_headers/nvme_intel.o 00:44:08.156 CXX test/cpp_headers/nvme_ocssd.o 00:44:09.091 CXX test/cpp_headers/nvme_ocssd_spec.o 00:44:09.350 CC test/env/vtophys/vtophys.o 00:44:09.916 CXX test/cpp_headers/nvme_spec.o 00:44:10.174 LINK vtophys 00:44:11.110 CXX test/cpp_headers/nvme_zns.o 00:44:12.048 CXX test/cpp_headers/nvmf.o 00:44:12.986 CXX test/cpp_headers/nvmf_cmd.o 00:44:13.924 CXX test/cpp_headers/nvmf_fc_spec.o 00:44:14.183 CC examples/vmd/led/led.o 00:44:14.751 CXX test/cpp_headers/nvmf_spec.o 00:44:15.012 LINK led 00:44:15.271 CXX test/cpp_headers/nvmf_transport.o 00:44:16.208 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:44:16.208 CXX test/cpp_headers/opal.o 00:44:16.777 LINK env_dpdk_post_init 00:44:17.036 CXX test/cpp_headers/opal_spec.o 00:44:17.604 CXX test/cpp_headers/pci_ids.o 00:44:18.170 CXX test/cpp_headers/pipe.o 00:44:18.737 CXX test/cpp_headers/queue.o 00:44:18.737 CXX test/cpp_headers/reduce.o 00:44:19.305 CXX test/cpp_headers/rpc.o 00:44:20.238 CXX test/cpp_headers/scheduler.o 00:44:20.497 CXX test/cpp_headers/scsi.o 00:44:20.756 CC test/app/jsoncat/jsoncat.o 00:44:20.756 CC test/env/memory/memory_ut.o 00:44:21.016 LINK jsoncat 00:44:21.584 CXX test/cpp_headers/scsi_spec.o 00:44:21.842 CXX test/cpp_headers/sock.o 00:44:22.410 CXX test/cpp_headers/stdinc.o 00:44:22.669 CC test/event/event_perf/event_perf.o 00:44:22.926 CC test/app/stub/stub.o 00:44:23.183 CXX test/cpp_headers/string.o 00:44:23.441 LINK event_perf 00:44:23.700 LINK memory_ut 00:44:23.958 LINK stub 00:44:23.958 CXX test/cpp_headers/thread.o 00:44:24.893 CXX test/cpp_headers/trace.o 00:44:25.461 CXX test/cpp_headers/trace_parser.o 00:44:26.397 CXX test/cpp_headers/tree.o 00:44:26.397 CXX test/cpp_headers/ublk.o 00:44:27.338 CXX test/cpp_headers/util.o 00:44:28.274 CXX test/cpp_headers/uuid.o 00:44:29.210 CXX test/cpp_headers/version.o 00:44:29.210 CXX test/cpp_headers/vfio_user_pci.o 00:44:30.146 CXX test/cpp_headers/vfio_user_spec.o 00:44:31.108 CXX test/cpp_headers/vhost.o 00:44:31.368 CC test/env/pci/pci_ut.o 00:44:31.936 CC app/spdk_dd/spdk_dd.o 00:44:32.194 CXX test/cpp_headers/vmd.o 00:44:32.762 LINK pci_ut 00:44:33.021 CXX test/cpp_headers/xor.o 00:44:33.586 LINK spdk_dd 00:44:34.151 CXX test/cpp_headers/zipf.o 00:44:36.051 CC app/fio/nvme/fio_plugin.o 00:44:38.585 LINK spdk_nvme 00:44:53.513 CC test/lvol/esnap/esnap.o 00:44:53.513 CC examples/nvmf/nvmf/nvmf.o 00:44:54.451 CC examples/util/zipf/zipf.o 00:44:54.451 LINK nvmf 
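Note: the CXX test/cpp_headers/*.o entries above are SPDK's public-header check: each installed spdk/*.h is compiled on its own, as C++, so a header that fails to include its own dependencies breaks the build. A rough illustration of the idea only, run from the repository root; the wrapper file names here are invented for the sketch, not taken from the repo:

    # compile a one-line wrapper per public header; -c alone is enough
    # to prove the header is self-contained
    for h in accel assert barrier base64; do
        printf '#include <spdk/%s.h>\n' "$h" > "check_$h.cpp"
        g++ -I include -c "check_$h.cpp" -o "check_$h.o"
    done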
00:44:55.021 CC test/event/reactor/reactor.o 00:44:55.021 LINK zipf 00:44:55.590 LINK reactor 00:44:57.495 CC test/nvme/aer/aer.o 00:44:58.479 LINK aer 00:45:02.667 LINK esnap 00:45:14.928 CC test/rpc_client/rpc_client_test.o 00:45:15.188 LINK rpc_client_test 00:45:17.723 CC test/thread/poller_perf/poller_perf.o 00:45:18.662 LINK poller_perf 00:45:30.925 CC test/thread/lock/spdk_lock.o 00:45:32.831 CC test/event/reactor_perf/reactor_perf.o 00:45:33.091 LINK spdk_lock 00:45:34.029 LINK reactor_perf 00:45:37.316 CC app/fio/bdev/fio_plugin.o 00:45:38.692 LINK spdk_bdev 00:45:42.880 CC examples/thread/thread/thread_ex.o 00:45:44.258 LINK thread 00:45:50.828 CC test/nvme/reset/reset.o 00:45:50.828 LINK reset 00:45:57.395 CC test/nvme/sgl/sgl.o 00:45:58.334 LINK sgl 00:46:00.241 CC test/nvme/e2edp/nvme_dp.o 00:46:01.179 LINK nvme_dp 00:46:06.510 CC test/event/app_repeat/app_repeat.o 00:46:06.768 LINK app_repeat 00:46:07.709 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:46:08.647 LINK histogram_ut 00:46:15.214 CC test/unit/lib/accel/accel.c/accel_ut.o 00:46:21.793 LINK accel_ut 00:46:43.751 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:46:45.750 CC test/unit/lib/bdev/part.c/part_ut.o 00:46:49.938 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:46:50.207 LINK scsi_nvme_ut 00:46:52.147 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:46:52.406 LINK part_ut 00:46:53.343 LINK gpt_ut 00:46:53.343 LINK bdev_ut 00:46:54.801 CC test/event/scheduler/scheduler.o 00:46:55.368 CC test/nvme/overhead/overhead.o 00:46:55.625 LINK scheduler 00:46:56.568 LINK overhead 00:47:00.808 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:47:02.212 LINK blob_bdev_ut 00:47:02.780 CC test/unit/lib/blob/blob.c/blob_ut.o 00:47:04.158 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:47:05.095 LINK tree_ut 00:47:09.285 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:47:09.853 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:47:11.231 LINK blobfs_async_ut 00:47:12.171 LINK blobfs_sync_ut 00:47:14.707 LINK blob_ut 00:47:14.966 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:47:15.901 LINK blobfs_bdev_ut 00:47:21.165 CC examples/idxd/perf/perf.o 00:47:22.120 LINK idxd_perf 00:47:25.407 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:47:25.407 CC examples/interrupt_tgt/interrupt_tgt.o 00:47:25.665 LINK interrupt_tgt 00:47:28.198 LINK vbdev_lvol_ut 00:47:28.456 CC test/unit/lib/dma/dma.c/dma_ut.o 00:47:29.862 LINK dma_ut 00:47:34.091 CC test/nvme/err_injection/err_injection.o 00:47:35.028 LINK err_injection 00:47:35.028 CC test/unit/lib/event/app.c/app_ut.o 00:47:35.964 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:47:36.222 LINK app_ut 00:47:36.482 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:47:37.856 LINK reactor_ut 00:47:37.856 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:47:42.042 LINK bdev_raid_ut 00:47:42.978 LINK bdev_ut 00:47:43.236 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:47:43.496 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:47:43.496 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:47:44.434 LINK ioat_ut 00:47:44.693 LINK init_grp_ut 00:47:46.073 LINK conn_ut 00:47:46.073 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:47:48.613 CC test/unit/lib/iscsi/param.c/param_ut.o 00:47:49.985 LINK param_ut 00:47:50.244 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:47:51.178 LINK iscsi_ut 00:47:54.479 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:47:54.479 LINK json_parse_ut 00:47:55.855 LINK bdev_raid_sb_ut 00:47:56.793 CC 
test/unit/lib/json/json_util.c/json_util_ut.o 00:47:57.730 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:47:58.298 LINK json_util_ut 00:47:59.234 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:48:00.170 LINK json_write_ut 00:48:00.429 LINK bdev_zone_ut 00:48:01.805 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:48:03.180 LINK concat_ut 00:48:06.467 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:48:06.467 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:48:07.402 LINK raid1_ut 00:48:07.971 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:48:08.232 LINK portal_grp_ut 00:48:10.174 LINK tgt_node_ut 00:48:10.174 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:48:10.742 LINK jsonrpc_server_ut 00:48:10.742 CC test/unit/lib/log/log.c/log_ut.o 00:48:11.679 LINK log_ut 00:48:12.616 CC test/nvme/startup/startup.o 00:48:13.186 LINK startup 00:48:15.099 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:48:15.099 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:48:15.666 CC test/nvme/reserve/reserve.o 00:48:15.925 CC test/nvme/simple_copy/simple_copy.o 00:48:16.197 LINK reserve 00:48:16.456 LINK raid5f_ut 00:48:16.719 LINK simple_copy 00:48:18.625 LINK lvol_ut 00:48:20.019 CC test/nvme/connect_stress/connect_stress.o 00:48:20.588 LINK connect_stress 00:48:25.847 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:48:25.847 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:48:26.414 CC test/nvme/boot_partition/boot_partition.o 00:48:27.352 LINK vbdev_zone_block_ut 00:48:27.352 LINK boot_partition 00:48:29.303 CC test/unit/lib/notify/notify.c/notify_ut.o 00:48:30.296 LINK notify_ut 00:48:34.486 LINK bdev_nvme_ut 00:48:34.486 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:48:37.038 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:48:37.973 CC test/nvme/compliance/nvme_compliance.o 00:48:38.540 LINK nvme_ut 00:48:39.476 LINK nvme_compliance 00:48:44.762 LINK nvme_ctrlr_ut 00:48:47.303 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:48:48.241 CC test/nvme/fused_ordering/fused_ordering.o 00:48:48.811 LINK fused_ordering 00:48:49.792 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:48:51.171 CC test/nvme/doorbell_aers/doorbell_aers.o 00:48:51.171 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:48:51.793 LINK doorbell_aers 00:48:51.793 LINK tcp_ut 00:48:52.362 LINK nvme_ctrlr_cmd_ut 00:48:52.622 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:48:52.881 LINK nvme_ctrlr_ocssd_cmd_ut 00:48:55.413 LINK nvme_ns_ut 00:48:56.787 CC test/nvme/fdp/fdp.o 00:48:57.354 CC test/nvme/cuse/cuse.o 00:48:57.921 LINK fdp 00:49:00.454 LINK cuse 00:49:00.713 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:49:01.651 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:49:03.558 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:49:04.127 LINK nvme_ns_cmd_ut 00:49:05.065 LINK nvme_ns_ocssd_cmd_ut 00:49:05.323 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:49:07.857 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:49:08.176 LINK nvme_pcie_ut 00:49:09.551 LINK ctrlr_ut 00:49:10.930 LINK subsystem_ut 00:49:11.190 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:49:11.190 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:49:13.100 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:49:14.037 LINK nvme_poll_group_ut 00:49:14.297 LINK ctrlr_bdev_ut 00:49:14.297 LINK ctrlr_discovery_ut 00:49:16.895 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:49:17.831 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 
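Note: each *_ut object in this stretch links into a standalone CUnit test binary named after the source file it covers (e.g. rdma_ut.o above becomes test/unit/lib/nvmf/rdma.c/rdma_ut). A hedged sketch of how the suite is typically exercised after the build; the runner path follows SPDK convention, and the exact invocation this job uses is not visible in this excerpt:

    ./test/unit/unittest.sh                  # drive the whole *_ut suite
    ./test/unit/lib/nvmf/rdma.c/rdma_ut      # or run a single test binary directly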
00:49:17.831 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:49:18.090 LINK nvmf_ut
00:49:18.090 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:49:19.025 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:49:19.284 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:49:19.284 LINK nvme_qpair_ut
00:49:19.544 LINK nvme_quirks_ut
00:49:19.802 LINK rdma_ut
00:49:20.061 LINK transport_ut
00:49:20.061 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:49:20.061 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:49:20.629 LINK dev_ut
00:49:20.937 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:49:21.196 LINK nvme_tcp_ut
00:49:21.196 LINK nvme_transport_ut
00:49:22.132 LINK lun_ut
00:49:22.132 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:49:22.700 LINK scsi_ut
00:49:23.655 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:49:24.248 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:49:24.248 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:49:24.508 LINK scsi_bdev_ut
00:49:24.768 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:49:24.768 LINK scsi_pr_ut
00:49:25.026 LINK nvme_io_msg_ut
00:49:25.595 CC test/unit/lib/sock/sock.c/sock_ut.o
00:49:25.595 LINK nvme_pcie_common_ut
00:49:25.595 CC test/unit/lib/sock/posix.c/posix_ut.o
00:49:25.853 CC test/unit/lib/thread/thread.c/thread_ut.o
00:49:26.112 LINK posix_ut
00:49:26.112 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:49:26.372 LINK sock_ut
00:49:26.372 LINK iobuf_ut
00:49:26.630 CC test/unit/lib/util/base64.c/base64_ut.o
00:49:26.630 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:49:26.630 LINK thread_ut
00:49:26.888 LINK base64_ut
00:49:27.147 LINK bit_array_ut
00:49:27.147 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:49:27.713 LINK pci_event_ut
00:49:27.713 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:49:28.280 LINK subsystem_ut
00:49:28.280 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:49:28.538 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:49:28.796 LINK cpuset_ut
00:49:28.796 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:49:29.054 LINK nvme_fabric_ut
00:49:29.054 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:49:29.054 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:49:29.054 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:49:29.054 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:49:29.312 LINK crc16_ut
00:49:29.312 LINK crc32_ieee_ut
00:49:29.312 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:49:29.312 LINK nvme_opal_ut
00:49:29.570 LINK rpc_ut
00:49:29.570 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:49:29.828 LINK nvme_cuse_ut
00:49:29.828 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:49:29.828 LINK crc32c_ut
00:49:29.828 CC test/unit/lib/util/dif.c/dif_ut.o
00:49:29.828 LINK crc64_ut
00:49:30.087 LINK nvme_rdma_ut
00:49:30.087 CC test/unit/lib/util/iov.c/iov_ut.o
00:49:30.087 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:49:30.408 CC test/unit/lib/util/math.c/math_ut.o
00:49:30.408 LINK iov_ut
00:49:30.408 LINK pipe_ut
00:49:30.667 LINK math_ut
00:49:30.667 LINK dif_ut
00:49:30.667 CC test/unit/lib/util/xor.c/xor_ut.o
00:49:30.667 CC test/unit/lib/util/string.c/string_ut.o
00:49:30.667 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:49:30.927 LINK xor_ut
00:49:30.927 LINK string_ut
00:49:30.927 LINK idxd_user_ut
00:49:31.494 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:49:32.061 LINK idxd_ut
00:49:32.321 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:49:32.321 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:49:32.321 CC test/unit/lib/rdma/common.c/common_ut.o
00:49:32.321 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:49:32.321 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:49:32.580 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:49:32.580 LINK ftl_l2p_ut
00:49:32.580 LINK common_ut
00:49:32.580 LINK ftl_bitmap_ut
00:49:32.838 LINK ftl_io_ut
00:49:32.838 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:49:32.838 LINK ftl_band_ut
00:49:33.097 LINK ftl_mempool_ut
00:49:33.097 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:49:33.097 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:49:33.097 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:49:33.097 LINK vhost_ut
00:49:33.356 LINK ftl_mngt_ut
00:49:33.923 LINK ftl_sb_ut
00:49:33.924 LINK ftl_layout_upgrade_ut
00:49:55.938 json_parse_ut.c: In function ‘test_parse_nesting’:
00:49:55.938 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:49:55.938   616 | test_parse_nesting(void)
00:49:55.938       | ^
00:49:55.938 13:09:38 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:49:55.938 make[1]: Nothing to be done for 'clean'.
00:50:01.203 13:09:43 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:50:01.204 13:09:43 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:50:01.204 13:09:43 -- common/autotest_common.sh@10 -- $ set +x
00:50:01.204 13:09:43 -- spdk/autopackage.sh@48 -- $ timing_finish
00:50:01.204 13:09:43 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:50:01.204 13:09:43 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:50:01.204 13:09:43 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:50:01.204 + [[ -n 2108 ]]
00:50:01.204 + sudo kill 2108
00:50:01.212 [Pipeline] }
00:50:01.229 [Pipeline] // timeout
00:50:01.234 [Pipeline] }
00:50:01.249 [Pipeline] // stage
00:50:01.255 [Pipeline] }
00:50:01.270 [Pipeline] // catchError
00:50:01.281 [Pipeline] stage
00:50:01.284 [Pipeline] { (Stop VM)
00:50:01.298 [Pipeline] sh
00:50:01.579 + vagrant halt
00:50:04.909 ==> default: Halting domain...
00:50:14.898 [Pipeline] sh
00:50:15.179 + vagrant destroy -f
00:50:18.466 ==> default: Removing domain...
00:50:19.047 [Pipeline] sh
00:50:19.329 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:50:19.339 [Pipeline] }
00:50:19.355 [Pipeline] // stage
00:50:19.362 [Pipeline] }
00:50:19.378 [Pipeline] // dir
00:50:19.384 [Pipeline] }
00:50:19.401 [Pipeline] // wrap
00:50:19.408 [Pipeline] }
00:50:19.422 [Pipeline] // catchError
00:50:19.433 [Pipeline] stage
00:50:19.435 [Pipeline] { (Epilogue)
00:50:19.451 [Pipeline] sh
00:50:19.739 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:50:41.681 [Pipeline] catchError
00:50:41.683 [Pipeline] {
00:50:41.698 [Pipeline] sh
00:50:42.038 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:50:42.297 Artifacts sizes are good
00:50:42.306 [Pipeline] }
00:50:42.322 [Pipeline] // catchError
00:50:42.336 [Pipeline] archiveArtifacts
00:50:42.343 Archiving artifacts
00:50:42.674 [Pipeline] cleanWs
00:50:42.686 [WS-CLEANUP] Deleting project workspace...
00:50:42.686 [WS-CLEANUP] Deferred wipeout is used...
00:50:42.693 [WS-CLEANUP] done
00:50:42.695 [Pipeline] }
00:50:42.715 [Pipeline] // stage
00:50:42.720 [Pipeline] }
00:50:42.739 [Pipeline] // node
00:50:42.745 [Pipeline] End of Pipeline
00:50:42.794 Finished: SUCCESS